In my previous post, I described the productivity issue our team was facing : how retrospectives were not working, and how I started looking at the improvement kata.
We had gone through the first phase of the improvement kata : set the end goal.
Generating enough profit for the business while sticking to a sustainable pace.
Time to start the second phase : Understand.
Were we really slower ? Or was it an illusion ?
When trying to understand, you have to start with the data you have. You continue digging until you get a deeper understanding of the situation.
We started with the available data : story points and velocity. Velocity is admittedly a pretty bad measure of productivity. (Note : we should never use velocity for performance appraisal.) In our case though, it felt like a good starting proxy measure.
Here is our velocity curve over 2 years.
It’s obvious that something changed. There are 2 parts to this graph. The velocity dropped between sprint 54 and 16.01. That’s a clue that our gut feeling was not completely wrong. Our productivity did change.
Our first hypothesis was that team member turnover was the cause. As with any team, some people came, and some people left. Let’s overlay the man-days and velocity curves.
That could only explain part of the problem !
We tried to fine-tune the man-days curve. We took people’s involvement in tasks outside of programming into account. We used coefficients depending on the developers’ experience. That did not provide a better explanation.
We had to find another explanation.
As I said earlier, velocity is not a measure of productivity. Any change in the way we were computing velocity would impact this curve.
We had kept photos and Trello boards from our retrospective meetings. We searched through them for anything that could impact velocity. Here is what we found :
At sprint 61, we started to do regular exploratory testing. Exploratory testing discovers more bugs on top of user-reported bugs. This made us spend more time fixing bugs.
At sprint 62, as we opted for a No Bug policy, we decided not to count story points for bugs.
💡Keep Photos and Trello boards of Retrospectives as a log of your working agreements changes
The timings almost perfectly matched what we had observed in the first place. The question that came to our minds was :
Are we spending too much time on bugs ?
Halfway through understanding
This is how we started to dig into our situation. It’s a good time to give you a bit of feedback about how we felt at that point.
It was the first time we tried the improvement kata. More than that, we did not find any tutorials or guides about how to run it. The only instructions we had were theoretical descriptions or super concrete examples. We had to bridge the gap and come up with our own way.
To summarize, we felt a bit lost, we had gathered data from here and there, and we did not know what to look at next. On top of that, the quality of the data we were collecting was not great. We were wondering if we would get anything out of these investigations.
If you are serious about continuous improvement, you should learn the improvement kata.
Retrospectives are great to pick all the low hanging improvements. Once you’ve caught up with the industry’s best practices, retrospectives risk drying up. Untapped improvement opportunities likely still exist in your specific context. The improvement kata can find those.
Here is how we applied the improvement kata to gain 25% in productivity in my previous team.
Thanks to repeated retrospectives, the team had been improving for 2 years. Retrospectives seemed like a silver bullet. We would discuss the current problems, grasp an underlying cause and pick a best practice. Most of the time, that fixed the problem.
Sometimes it did not work though. Even when the issue came back in a later retrospective, it would not survive a second round of scrutiny. In the previous two years, the team had transformed the way it worked. It adopted TDD, incremental refactoring, pair programming, remote work, automated performance testing and many other practices.
Lately though, things did not work so well. The team was struggling with productivity issues. The team was not slowing down, but the scope and complexity of the product had grown. Features were not getting out of the door as fast as they used to. We had the habit of prioritizing improvements and bug fixes over features. That used to improve the flow enough to get more and more features done. It did not seem to work anymore.
We tried to tackle the issue in retrospectives. We would change the way we prioritized features … To be later bogged down by bugs, technical debt or bad tools. We would discuss that in retrospective, and decide to change the priorities again … The loop went on and on a few times.
We were getting nowhere.
The improvement kata
That’s why I started to look for other ways to do continuous improvement. I stumbled upon a book called Small Lean Management Guide for Agile Teams. The book is in French, but I wrote an English review. I fell in love with the way the authors dug into the hard data of how they worked to understand and fix their problems.
To learn more about this technique, I read Toyota Kata. It details two management tools used at Toyota : the improvement and coaching katas. Some say these are Toyota’s special weapon. The thing that grew them from a small shop to the largest car manufacturer in the world.
They are katas because they are routines. They must be re-executed many times. The improvement kata should improve the flow of work. The coaching kata helps someone (or yourself) learn the improvement kata. Every time we go through the kata, we also understand it better.
Here is how the improvement kata goes :
Describe your end goal
Understand where you stand about this goal by measuring facts and data
Based on your end goal and the current situation, define where you’d like to be in 3 months or less
Run experiments (PDCA cycles) until you reach this target condition
The coaching kata is a way to coach someone into applying the improvement kata. The fun thing is that you can coach yourself ! The idea is to ask questions to the coachee to remind him of where he stands in his improvement kata.
What do you actually mean by throwing code away ? Does it mean stopping unit testing and refactoring ?
So I guess it deserves a bit of explanation.
What is it all about ?
When programming, I don’t throw code away a lot. I tend to rely on my automated tests to change the code I already have. That might be a problem.
As with everything, there is no one size fits all. We should choose the best practice for our current situation. Same thing applies for incremental changes versus rewriting.
TDD makes incremental changes cost effective, and as such, is a key to get out of the Waterfall.
The idea of throwing code away frequently is to make rewriting cost effective, so we can do it more often.
Why would we want to do so ?
In “When Understanding means Rewriting”, Jeff Atwood explains that reading code can be more difficult than writing it. There is a point where rewriting is going to be cheaper than changing.
The more unit tests you have, the later you reach that point. The more technical debt you take on, the sooner you reach it. The bigger the part to rewrite, the riskier it becomes.
Let’s imagine you knew a safe way to rewrite the module you are working on. You could be faster by taking more technical debt and writing less unit tests ! Mike Cavaliere framed it as F**k Going Green: Throw Away Your Code.
This would be great for new features, that might be removed if they don’t bring any value. It would also be a good way to get rid of technical debt. Naresh Jain also makes the point that without tests, you’ll have to keep things simple (here and here) !
TDD and unit testing are cornerstones of XP. If we remove them, we need something else to build quality in. Could Mob Programming and remote teams do the trick ?
“Given enough eyeballs, all bugs are shallow”. Pair programming and code reviews catch a lot more bugs than solo programming. Only a few bugs are likely to pass through the scrutiny of the whole team doing mob programming.
What about remote ? Martin Fowler explains that remote teams perform better by hiring the best. Programmer skill has long been known as a main driver of software quality.
Finally, the Cucumber team reported that Mob Programming works well for remote teams.
How to make this safer ?
Even with the best team, mistakes will happen. How can we avoid pushing a rewrite that crashes the whole system ?
The main answer to that is good continuous deployment. We should deploy to a subset of users first. We should be able to rollback straight away if things don’t work as expected.
As the system grows, microservices can keep continuous deployment under control. We can deploy, update and roll them back independently. By nature, microservices also reduce the scope of the rewrite. That alone, as we said earlier, makes rewrites safer.
As a last point, some technologies make building microservice systems easier and more incremental. The Erlang VM, for example, with first-class actors, is one of these. Time to give Erlang and Elixir a try !
Is this always a good idea ?
There are good and bad situations.
For example, a lean startup or data driven environment seems a good fit. Suppose your team measures the adoption of new features before deciding to keep or ditch them. You’d better not invest in unit tests straight away.
On the other side, software for a complex domain will be very difficult to rewrite flawlessly. I have worked in the finance industry for some years, I know what a complex domain is. I doubt I could rewrite a piece of intricate finance logic without bugs. I would stick to DDD and unit testing in these areas.
How to dip your toe
Here is how I intend to dip my toe. I won’t drop automated testing completely yet. What I could do instead (and have already done on side projects) is to start with end to end tests only.
From there every time I want to change a piece of code, I have many options :
Add a new end to end test and change the code.
If the code is too much of a mess, I can rewrite it from scratch. I’ll have the safety of the end to end tests.
If I see that the module is stabilizing, has not been rewritten for a while, and yields well to modifications, I could start splitting end to end tests into unit tests and embark on the TDD road.
Maybe later, when I have a team doing mob programming to find my mistakes, we’ll skip the end to end tests.
As a team starts doing continuous deployment, devs get more and more involved in ops. This privileged view of the users’ behaviour can turn devs into product experts. Why not push the logic further and make them the product guys as well ?
Test in prod
Continuous deployment opens up many opportunities. Deploying safely requires bulletproof rollbacks. Once devs take care of product, code and ops, they can shortcut testing and directly test in prod with some users ! Rollback remains an option at any time.
That might seem like wishful thinking. The idea is to fix bugs as soon as they appear, but also to prevent them in the first place. Continuous deployment requires great engineering practices, which enable this way of working. A story cannot be “finished” until the test in prod is over, and the bugs fixed.
In Getting Real, the Basecamp guys said that their default answer to any new feature request was “No”. Only after the same thing got asked many times would they start thinking about it. Maintaining a long backlog is a waste of time. Dumping all backlog items but the current ones saves time. Important ideas will always come up again later.
This one is famous already. Some teams have saved time by using story count instead of story points. What’s the point anyway if the team is already :
working as best as it can
on the most important thing
and deploying as soon as possible
This is how to drive the system’s evolution. Instead of relying on projects, backlogs and predictions, use data. Data from user logs and interviews proves whether a new feature is worth building or dumping. Exploring the logs of internal tools can help to continuously improve.
Lean startup & Hackathons
Incremental improvements, in either product or organization, are not enough. As Tim Harford explained in Adapt, survival requires testing completely new ideas. Lean startup & hackathon experiments can do just that.
Improvement kata is a way to drive long term continuous improvement. It’s the main tool for that at Toyota (read Toyota Kata to learn more). It provides more time to think of the problem than a retrospective. It also fits perfectly in a data driven environment.
Pair programming is great for code quality and knowledge sharing. Mob programming is the more extreme version of pairing where all the team codes together.
An alternative to refactoring with unit tests is to throw away and rewrite once the code gets too bad. Companies have been doing that for years. I worked at a bank that used to throw away and rewrite individual apps that were part of a larger system. It can be a huge waste of money if these subsystems are too large. Scaling down to individual classes or microservices could make this cost effective.
What’s striking from this list is that it’s not that far from the original XP ! For example, continuous deployment and generalists have always been part of it. Another point is that it is not science fiction ! I found many teams reporting success with these practices on the internet ! A well-oiled XP team might very well get there through continuous improvement.
The more I look at it, the more XP stands as a unique lighthouse in the foggy Agile landscape.
As for me, I’m not sure I’d dare to swap TDD & refactoring for throwaway & rewrite. I’m particularly worried about the complex domain specific heart of systems. Nothing prevents from using both strategies for different modules though.
I should give it a try on a side project with a microservice friendly language, such as Erlang or Elixir.
Do you remember how people who are not used to the phone tend to shout in it, for the message to get far ? Read on and I’ll explain how this silly habit will make your remote retrospectives great !
A typical retrospective starts with an energizing activity, or energizer. It’s important for two reasons. First, people who don’t speak during the first 5 minutes of a meeting are more likely to remain silent until the end. Second, getting everyone to do an energizing and fun activity sets the tone for a peaceful and creative time.
Our experiences with remote energizers
When we started to do retrospectives at work, all the team was co located in Paris. There are tons of activities available on the internet to run effective energizers. We could do games like Fizz Buzz, or drawing based activities like face drawing and visual phone. It was easy and fun.
A few years ago, Ahmad Atwi joined our team from Beirut. Our catalog of energizers shrank to the few activities that we could run remotely. On top of that, going through the remote medium made it more challenging for energizers to … energize ! With time and trial, we managed to understand what works and how to pick the right energizer for a remote team.
Principles for remote energizers
We have an Agile Special Interest Group at Murex, where volunteers meet to share things they find interesting. A few weeks ago, during one of these sessions, we discussed remote energizers in a Lean Coffee.
If there are enough teammates at each location, energizers that are played in small groups will work well. For example, it would be easy to organize a balloon battle or a back to back.
It’s also easy to use variations on an activity that proved effective. For example, explore new kinds of questions. It’s even ok to repeat verbatim an activity from time to time.
Replace energizing by team building. Team building is particularly important for remote teams. Instead of engaging activities, it’s ok to have everyone share a personal anecdote. By knowing each other better, the team can build trust. For example, you could introduce such an activity with : “What book would you bring on a desert island ? Why ?”
One last thing my colleague Morgan Kobeissi and I came up with to energize a remote meeting is to YELL. The idea is to ask everyone to answer a question while standing and yelling. A question could be “How long have you been working and what companies did you work for ?”
Remote work is here to stay. More and more teams are facing similar difficulties. We need to invent new work practices. If you discovered new ways to run remote energizers, please share them with a comment.
Don’t worry if your unit tests go to the DB, that might not be so bad.
When I started writing unit tests, I did not know what these were. I read the definition, and strived to follow the recommendations :
they should be independent from each other
they should not access the DB
they should not use the network
they should only cover a small scope of your code
I started to write unit tests on my own and became test infected pretty fast. Once I got convinced of the benefits of unit testing, I tried to spread the practice around me. I used to explain to people that it is very important to write real unit tests by the book. Otherwise, Bad Things would happen …
How I changed my mind
I spent a few years working on a Rails side project called mes-courses.fr. I was using a small test gem to enforce that no unit tests were accessing the DB. I had to write a lot of mocks around the code. I ended up hating mocks : they are too painful to maintain and provide a false sense of security. I’m not alone in this camp : check DHH’s keynote at RailsConf 2014.
At some point, the mock pain got so bad that I stopped all developments until I found another way. I found a pretty simple workaround : use in-memory SQLite. I got rid of all the DB access mocks. Not only were the tests easier to write and maintain, but they were as fast as before, and they covered more code.
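The project was in Rails, but the trick translates to any stack. Here is a minimal Python sketch of the same idea, with an invented `carts` table for illustration :

```python
import sqlite3

def make_db():
    # Each test opens its own in-memory database : fast, isolated,
    # and no DB-access mocks to maintain.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE carts (id INTEGER PRIMARY KEY, total REAL)")
    return db

def test_cart_total():
    db = make_db()
    db.execute("INSERT INTO carts (total) VALUES (42.5)")
    (total,) = db.execute("SELECT total FROM carts").fetchone()
    assert total == 42.5

test_cart_total()
```

Because every test creates its own connection, tests stay independent without any cleanup code.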
That changed something fundamental in my understanding of testing
It’s all about speed baby
The only thing that makes unit tests so important is that they run fast.
Unit tests as described in the literature run fast. Let’s see what happens when you remove one of the recommendations for unit tests.
If tests depend on each other, their outcome will change with the execution order. This wastes our time in analyzing the results. On top of that, independent unit tests are easy to run in parallel, providing an extra speedup. We lose this potential when our tests are dependent.
Tests that rely on an out-of-process DB run slower. Tests need to start the DB before anything else. Data needs to be set up and cleaned for every test. Accessing the DB implies using the network, which takes time as well. There’s also a risk of making the tests dependent by sharing the same DB. A last issue is troubleshooting the DB process when things don’t work.
Tests that use the network are slow too ! First, the network is slower than memory. Second, data serialization between processes is slow as well. Finally, these tests are likely to use some form of sleep or polling, which is slow, fragile, or both !
Finally, there is always a scope past which a test will be too slow.
This means not only that unit tests are fast, but also that fast tests usually show the features of unit tests.
My guess is that ‘unit tests’ were explicitly defined as a recipe for fast tests ! If you stick to the definition of unit tests, you’ll get fast tests and all their benefits.
That also means that we should focus first on having fast tests rather than unit tests. Here is my real check to know if tests are fast enough :
Is the build (and the tests and everything) less than 10 minutes ?
Can I continuously run my tests while coding and stay in the flow ?
If both answers are yes, then I won’t question myself too much whether my tests are unit, integration or end to end.
So what ?
I’ve been experimenting with these heuristics for some time. Side projects are great for experimenting since you don’t have a team to convince ! Here are my main takeaways :
Stick to end to end tests at the beginning of your project. They are easy to refactor into finer-grained tests later on.
In-memory DBs are great to speed tests up without wasting your time with mocking. We can use a unique DB for every test to keep them independent.
Large-scope tests are not an issue, provided two things :
The code contains very few side effects.
It provides good exception and assertion messages.
On the other side, there are things that I still recommend :
Independent tests are easy to write from the beginning, but difficult to fix later on. As they save a lot of headaches in diagnosis, I stick to them from the start.
Avoid the network : it makes tests slow, fragile and tricky to diagnose. But please, read this before jumping to mocks.
These rules have served me well, particularly in my side projects, where I don’t have a lot of time. What about you ? Do you have your own testing rules ?
We write tons of legacy code everyday. Experienced developers understand that legacy code is not something special. Legacy code is our daily bread and butter.
Should we abandon all hope as we enter legacy code ? Would that be professional ? In the end, code is only a bunch of bytes, somewhere on a drive. We are the software professionals. We need to deal with that.
1. Master non legacy refactoring first
Please calm down before this “Bring ‘em out” energy goes to your head.
I did not say that refactoring legacy code is easy. Legacy code can bite … bad. I’ve been in teams which literally spent nights fixing a bad refactoring gone to production …
Before you can refactor legacy code, you need to be good at refactoring new code. We all learned to swim in the shallow pool, it’s the same with refactoring. Mastering green code refactoring will help you when tackling legacy code.
First, you’ll know the ideal you’d like to get to. Knowing how productive a fast feedback loop is will motivate you to keep on refactoring.
Second, you’ll have a better idea of the baby steps to take you through a tricky refactoring.
If you are not yet at ease with greenfield refactoring, have a look at my previous post.
2. Understand that refactoring legacy code is different
Unfortunately, trying to force unit tests into legacy code usually results in a mess. It introduces lots of artificial mocks in a meaningless design. It also creates brittle and unmaintainable tests. More harm than good. This might be an intermediate step, but it is usually not the quickest way to master your legacy code beast.
Here are alternatives I prefer.
3. Divide and conquer
This is the most straightforward way to deal with legacy code. It’s an iterative process to repeat until you get things under control. Here is how it goes :
(1) Rely on the tests you have, (2) to refactor enough, (3) to test sub-parts in isolation. (4) Repeat until you are happy with the speed of the feedback loop.
Depending on the initial state of your tests, this might take more or less time. Your first tests might even be manual. This is the bulldozer of refactoring. Very effective, but slow.
4. Team up
Changing legacy code is a lot easier when you team up. First, it creates a motivating “we’re all in this together” mindset. Second, it guards us against silly mistakes.
Mob programming might seem very expensive, so let me explain why it is not. Suppose you want to introduce some tests in a tricky section of code.
With mob programming, all the team gathers for half a day to work on this change. Together, they find and avoid most of the pitfalls. They commit a high quality change, which creates only one bug down the road.
Let’s see the alternative.
Using solo programming, a poor programmer tries to tackle the change all by himself. He spends a few days understanding and double-checking all the traps he can think of. Finally, he commits his change, which results in many bugs later on. Every time a bug pops up, it interrupts someone who has to fix it ASAP.
The savings in interruptions are greater than the up-front cost of mob or pair programming.
5. Seams
A software seam is a place where you can alter behavior in your program without editing in that place.
This is one of the many interesting things I learned from Michael’s book about legacy code.
Object polymorphism is only one kind of seam. Depending on your language, many other types of seams can be available.
Type seam for generic languages
Static link seam for static libraries
Dynamic link seam for dynamic libraries
Finding seams in your program is something opportunistic. Keep in mind though that testing through seams is not the end goal. It is only a step to bootstrap the test-refactor loop and start your refactoring journey.
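To make this concrete, here is what an object seam can look like in Python. The `OrderProcessor` class and its alerting behavior are invented for the example :

```python
# Production code calls self.send_alert(), a seam that a test subclass
# can override without editing the production class itself.
class OrderProcessor:
    def process(self, order):
        total = sum(order.values())
        if total > 1000:
            self.send_alert(total)  # the seam : behavior we can alter from outside
        return total

    def send_alert(self, total):
        raise RuntimeError("would page the on-call engineer")  # real side effect

class TestableOrderProcessor(OrderProcessor):
    def __init__(self):
        self.alerts = []

    def send_alert(self, total):
        self.alerts.append(total)  # record the alert instead of paging anyone

processor = TestableOrderProcessor()
assert processor.process({"book": 600, "laptop": 700}) == 1300
assert processor.alerts == [1300]
```

This bootstraps a test around legacy code without touching the tricky part first.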
6. Mikado Method
How do you get to your end goal then ? How do you refactor only what’s useful for your features ? How do you do large refactorings in baby steps ?
Over time, I found that the Mikado Method is a good answer to all these issues. The goal of the Mikado Method is to build a graph of dependent refactorings. You can then use it to perform all these refactorings one by one. Here is the Mikado Method by the book.
Before anything else, you’ll need a large sheet of paper to draw the graph. Then repeat the following :
try to do the change you want
If it builds and the tests pass, great, commit and you’re done
Otherwise, add a node for the change you wanted to do in your mikado graph
Write down the compilation and test errors
Revert your change
Recurse from the first step for every compilation or test error
Draw a dependency arrow from the nodes of errors to the node of your initial change
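The steps above can be sketched in a few lines of Python. The graph below is purely illustrative (the refactoring names are invented) ; real graphs are drawn on paper, but the leaf-first walk is the same :

```python
# Each node maps to the prerequisite refactorings it depends on.
graph = {
    "extract PriceCalculator": [],
    "add constructor injection": [],
    "split OrderService": ["extract PriceCalculator", "add constructor injection"],
    "introduce Order interface": ["split OrderService"],
}

done = []
while len(done) < len(graph):
    for node, deps in graph.items():
        # A leaf : all its prerequisites are already committed.
        if node not in done and all(d in done for d in deps):
            done.append(node)  # do the refactoring, run the tests, commit

# Prerequisites always come before the changes that depend on them.
assert done.index("split OrderService") > done.index("extract PriceCalculator")
```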
Once you’ve built the full graph, tackle the refactorings from the leaves. As leaves have no dependencies, it should be easy to do and commit them.
When I first read about the Mikado Method, it seemed very simple and powerful. Things got more complex when I tried to apply it. For example, the fact that some changes don’t compile hides future test failures. That means that very often, the “Build the graph” and “Walk the graph” phases overlap. In real life, the graph evolves and changes over time.
My advice about the Mikado Method is not to take it to the letter. It’s a fantastic communication tool. It helps not to get lost and to avoid a refactoring tunnel. It also helps to tackle refactoring as a team.
It is not a strict algorithm though. Build and tests are not the only way to build the graph. Very often, a bit of thinking and expert knowledge are the best tools at hand.
7. Bubble Context
Refactoring needs to be opportunistic. Sometimes there are shortcuts in your refactoring path.
If you have access to a domain expert, the Bubble Context will cut the amount of refactoring to do. It’s also an occasion to get rid of all the features that are in your software but that are not required anymore.
The Bubble Context originated in the DDD community, as a way to grow a clean domain model inside an existing code base. The idea is to start a small, isolated “bubble” of new model code, shielded from the legacy by an anti-corruption layer, and to grow it feature by feature.
I have friends who are fans of the bubble context. It is super effective provided you have a domain expert. It is a method of choice in complex domain software.
8. Strangler Pattern
The Bubble Context works great when refactoring domain-specific code, but what about the rest ? I had good results with the Strangler pattern.
For example, we had to refactor a rather complex parser for an internal DSL. It was very difficult to incrementally change the old parser, so we started to build a new one aside. It would try to parse, but delegate to the old one when it failed. Little by little, the new parser was handling more and more of the grammar. When it supported all the inputs, we removed the old one.
The strangler is particularly well suited for refactoring technical components. They have more stable interfaces and can be very difficult to change incrementally.
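Here is a toy Python sketch of the parser strangler described above. The grammar and class names are invented ; the point is the delegation to the legacy implementation :

```python
class LegacyParser:
    def parse(self, text):
        return ("legacy", text)  # stands in for the old, hard-to-change parser

class NewParser:
    def __init__(self, fallback):
        self.fallback = fallback

    def parse(self, text):
        # The new parser only handles the grammar it supports so far …
        if text.startswith("let "):
            name, _, value = text[4:].partition("=")
            return ("assign", name.strip(), value.strip())
        # … and delegates everything else to the old one.
        return self.fallback.parse(text)

parser = NewParser(LegacyParser())
assert parser.parse("let x = 1") == ("assign", "x", "1")
assert parser.parse("print x") == ("legacy", "print x")
```

When the fallback is never hit anymore, the old parser can be deleted.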
9. Parallel Run
This is more of a trick than a long term strategy. The idea is to use the initial (legacy) version of the code as a reference for your refactoring. Run both and check that they are doing the same thing.
Here are some variations around this idea.
If the code you want to refactor is side effect free, it should be easy to duplicate it before refactoring. This enables running both to check that they compute the same thing.
Put this in a unit test to bootstrap a test-refactor loop. You can also run both in production and log any difference. You’ll need access to production logs … Devops teams have a refactoring advantage !
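A minimal sketch of such a parallel run in Python, with invented function names ; the wrapper keeps returning the trusted result while logging divergences :

```python
import logging

def parallel_run(old_fn, new_fn):
    # Call both implementations, log any divergence, and keep returning
    # the trusted (old) result until the rewrite has proven itself.
    def wrapper(*args, **kwargs):
        old_result = old_fn(*args, **kwargs)
        new_result = new_fn(*args, **kwargs)
        if new_result != old_result:
            logging.warning("divergence on %r: old=%r new=%r",
                            args, old_result, new_result)
        return old_result
    return wrapper

def legacy_total(prices):
    return sum(prices)

def rewritten_total(prices):
    return round(sum(prices), 2)  # the refactored candidate

total = parallel_run(legacy_total, rewritten_total)
assert total([1.0, 2.0]) == 3.0
```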
Here is another use of your logs. If the code writes a lot of logs, we can use them as a reference. Capture the logs of the old version, and unit test that the refactored version prints the same logs out. That’s an unmaintainable test, but good enough to bootstrap the test-refactor loop.
You don’t need to refactor dead code ! Again, access to production logs is a great advantage for refactoring.
Add logs to learn how the real code runs. If it’s never called, then delete it. If it’s only called with some set of values, simplify it.
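A sketch of how that can look in Python : a hypothetical `legacy_discount` function is wrapped with a decorator that counts and logs its calls :

```python
import logging
from collections import Counter

calls = Counter()

def trace_usage(fn):
    # Log every call with its arguments : after a while in production,
    # zero entries for a function means it is dead code.
    def wrapper(*args, **kwargs):
        calls[(fn.__name__, args)] += 1
        logging.info("%s called with args=%r kwargs=%r", fn.__name__, args, kwargs)
        return fn(*args, **kwargs)
    return wrapper

@trace_usage
def legacy_discount(kind):
    return {"gold": 0.2, "silver": 0.1}.get(kind, 0.0)

legacy_discount("gold")
legacy_discount("gold")
assert calls[("legacy_discount", ("gold",))] == 2
```

If the counter stays empty after a release or two, you can delete the function instead of refactoring it.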
No silver bullet
That was a whirlwind tour of the legacy code refactoring techniques I know. It’s no promise that refactoring will become easy or fast. I hope it is a good starting point to set up and walk a refactoring plan.
In my last post, I explained why incremental refactoring techniques will make you both more productive and relaxed.
As anything worth its salt, the path to full mastery is long and requires dedication. The good news is that you’ll start to feel the benefits long before you are a master.
The quickest thing that will get you up to speed is dedicated practice. Take some time to do some exercises outside of any ‘production’ code.
TDD Coding Katas
The most famous practice to learn TDD also works very well to learn refactoring. That shouldn’t be a surprise as incremental refactoring is an integral part of TDD.
There are many ways to do your first coding kata. You could find a coding dojo near you (ask meetup.com). Or you could find motivated colleagues to start one at your company … I wrote in more details about how to attend a coding dojo in this post.
You can also practice katas on your own. My friend Thomas Pierrain rehearses the same katas to discover deeper insights.
The goal of incremental refactoring is to keep the code production ready all the time. Smaller commits is one happy consequence of that.
You can stretch your refactoring muscles by doing coding katas and keeping the code compiling all the time. You’ll need to master your IDE and its automated refactoring. Most of all, it will shift your attention from the goal to the path !
I learned at the SPA conference that this is called ‘Refactoring Golf’. The name comes from Golf contests, popular in the Perl community, whose goal is to write the shortest program possible for a specific task. The goal of a Refactoring Golf is to go from code A to code B in the fewest transformations possible.
Real mastery does not come by practice alone. Studying theory alongside practice yields deeper insights. Theory enables to put your practice into perspective and to find ways to improve it. It saves you from getting stuck in bad habits. It also saves you from having to rediscover everything by yourself.
Develop your design taste
In Taste for Makers, Paul Graham explains why taste is fundamental to programming. Taste is what allows you to judge whether code is good or bad in a few seconds. Taste is subjective, intuitive and fast, unlike rules, which are objective but slower. Expert designers use taste to pinpoint issues and good points in code on the spot.
Within the fast TDD – Refactoring loop, taste is the tool of choice to drive the design. Guess what : we can all improve our design taste !
Code smells are the first things to read about to improve your design taste. Once you know them well enough, it will be possible to spot things that might need refactoring as you code.
Spotting problems is nice, but finding solutions is better ! Design Patterns are just that … There has been a lot of controversy around Design Patterns. While overusing them leads to bloated code, using them to fix strong smells makes a lot of sense. There is even a book on the subject.
Finally, there’s a third and most common way to improve our design taste. It’s to read code ! The more code we read, the better our brain becomes at picking up small clues about what is nice and what is not. It’s important to read clean code but also bad code. To read code in different languages. Code built on different frameworks.
So, read code at work, read code in books, read code in open source libraries, good code, legacy code …
The book is said to be a difficult read, but the content is worth gold. If you have the grit, give it a try ! At the end, you should understand how your IDE does automated refactoring. You should also be able to perform all the refactorings that your IDE does not provide by hand ! This will enlarge your refactoring toolbox, and help you to drive larger refactorings from A to B.
Develop a refactoring attitude
Practice makes perfect. Whatever our refactoring skill level, there is always something to learn by practicing more.
Make it a challenge
As you are coding, whenever you find a refactoring to perform on your code, make it a challenge to do it in baby steps. Try to keep the code compiling and the tests green as much as possible.
When things go wrong, revert instead of pushing forward. Stop and think, try to find a different path.
If you are pairing, challenge your pair to find a safer track.
This might delay you a bit at first, but you’ll also be able to submit many times per day. You’ll see that your refactoring muscles will grow fast. You should see clear progress in only 1 or 2 weeks.
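The baby-step discipline can be sketched in code. Here is a hypothetical Python example (the names are mine, not from the post) of renaming a function without ever breaking callers or tests :

```python
# Baby-step rename: instead of renaming `calc` everywhere in one risky
# sweep, introduce the new name, migrate callers, then delete the old one.

# Step 1: the original function, with tests that stay green throughout.
def calc(principal, rate, months):
    return principal * rate / (1 - (1 + rate) ** -months)

# Step 2: add the intention-revealing name as a thin wrapper (still green).
def monthly_payment(principal, rate, months):
    return calc(principal, rate, months)

# Step 3: migrate call sites one by one, running the tests after each.
# Step 4: once no caller uses `calc` anymore, inline it into
# `monthly_payment` and delete it. The code compiled and the tests were
# green at every single step, so you could have committed at any point.
```

The same introduce-migrate-remove pattern scales up to renaming classes, moving methods or changing signatures.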
Team up against long refactorings
If your team prioritizes a user story that will need some re-design, try to agree on a refactoring plan. The idea is to find a coarse grain path that will allow you to commit and deliver many times. This plan might also help you to share the work on the story.
Having to question and explain your assumptions will speed up your learning.
Refactoring is most useful with bad legacy code. Unfortunately, it is also where it is the most difficult. Next week’s blog post will be about what we can do to learn how to refactor legacy code.
Incremental refactoring is key to make releases non-events ! As early as 2006, using XP, we were releasing mission critical software without bugs ! We would deliver a new version of our software to a horde of angry traders and go to the movies without breaking a sweat !
What’s so special about incremental refactoring ?
Avoid the tunnel effect
Mastering incremental refactoring techniques allows you to break a feature down into baby steps. Not only smaller commits, but also smaller releases ! You can deploy and validate every step in production before moving to the next !
Small releases are also a lot easier to fix than big bang deployments. That alone is a good enough reason to deploy in baby steps.
There are a lot of other advantages to small deployments. Merges become straightforward. Someone can take over your work if you get sick. Finally, it’s also easier to switch to another urgent task if you need to.
When you know that you will be able to improve your work later on, it becomes possible to stick to what’s needed now. After spending some time working on a feature, it might turn out that you delivered enough value. Between enhancing this feature and starting another one, pick the most valuable. Don’t be afraid to switch. Incremental refactoring makes it easy to resume later on if it makes sense.
Real productivity is not measured through code, but through feature value. This explains why incremental refactoring is more productive than up-front / big-bang development.
Know where you stand
As you work through your feature, you’ll have to keep track of the done and remaining steps. As you go through this todo list and deliver each successive step, you get a pretty clear idea of where you stand. You’ll know that you’ve done 3 out of 7 steps, for example. It helps everyone know what the remaining work is and when you’ll be able to work on something else.
A few times, I fell in the trap of features that should have taken a few hours and that lingered for days. I remember how stupid I was feeling every morning, explaining to my colleagues that it was more complex than I had thought, but that it should be finished before tomorrow … Learning incremental refactoring techniques saved me from these situations.
Deliver unexpected features
Incremental refactoring techniques improve the code. As a systematic team-wide effort, they keep the code healthy and evolvable. When someone requests an unexpected feature late, you’ll be able to deliver it.
This should improve your relationship with product people. They will be very happy when you build their latest idea without a full redesign.
Joel Spolsky wrote a long time ago that rewriting a large piece of software is the number 1 thing not to do ! All my experiences in rewriting systems have been painful and stressful.
It always starts very rosy. Everyone is feeling very productive with the latest tools and technologies. Unfortunately, it takes a lot of features to replace the existing system. As always with software, the time estimates for the rewrite are completely wrong. As a result, everyone starts grumbling about why this rewrite is taking so long. The fact that the legacy system is still evolving does not help either. Long story short, the greenfield project ends up cutting corners and taking on technical debt pretty fast … fueling the infamous vicious circle again.
Incremental refactoring techniques offer an alternative. They enable you to change and improve the architecture of the legacy system. It looks longer, but it’s always less risky. And looking back, it’s almost always faster as well !
Ease pair programming
eXtreme Programming contains a set of practices that reinforce each other. As I wrote at the beginning, refactoring goes hand in hand with TDD. Pair programming is another practice of XP.
TDD and Refactoring simplify pair programming. When a pair is doing incremental refactoring, they only need to discuss and agree on the design at hand. They know that however the design needs to evolve in the long term, they’ll be able to refactor it. It’s a lot easier to pair program if you don’t have to agree on all the details of the long term design …
In turn, pair programming fosters collective code ownership. Collective code ownership increases the truck factor. Which reduces the project risks and makes the team’s productivity more stable. In the long run, this makes the work experience more sustainable and less stressful.
Simplify remote work
Refactoring will also save you from the commutes and allow you to work closer to the ones you love !
Refactoring techniques enable small commits. Small commits simplify code reviews, which are key to remote or distributed work. Even if you are doing remote pair programming, small commits help to switch the control between buddies more often.
To be continued
I hope that by now I have persuaded you to learn incremental refactoring techniques. My next post will dig into the details of how to do that.
To summarize, we had the chance to attend a lot of very interesting sessions during the 3 days of the conference. Here are 5 pearls of wisdom I took back with me.
What connascences are
Identifying code connascences helps to rank refactorings and keep the system maintainable.
Continuous refactoring is one of the core practices of XP. For me, knowing what to refactor next has been a matter of code smells, discussing with my pair and gut feeling.
A connascence is a coupling between parts of the system. Two parts of your code are connascent if changing one implies changing the other. For example, a function call is connascent by name with the function definition. If you change one, you need to change the other.
Connascences are more formal than code smells. We can detect and rank them to pick the most important refactoring to do. People have listed 9 types of connascences. Some are visible in the source code, others are dynamic and difficult to spot before runtime.
The lowest form of connascence is ‘of name’, like in the function call example above. The worst form is ‘of Identity’, when different parts of the system must reference the same object.
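A minimal Python sketch (hypothetical names, mine and not from the talk) of the weakest form, connascence of name, and of how keyword arguments weaken connascence of position :

```python
# Connascence of name: the call must spell the function's exact name.
# Rename the definition and every call site must change too. This is
# the weakest form, and IDE renames handle it trivially.
def apply_discount(price, rate):
    return price * (1 - rate)

total = apply_discount(100, 0.1)  # connascent by name with the definition

# Positional calls add connascence of position: callers must also
# remember the argument order. Keyword arguments remove the coupling
# on order, degrading it back to plain connascence of name.
total_kw = apply_discount(price=100, rate=0.1)
```

Refactoring from positional to keyword arguments is a tiny example of deliberately trading a stronger connascence for a weaker one.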
The higher the connascence, the more difficult it is to evolve the parts involved. Instead of relying on intuition, you can use a connascence based refactoring algorithm :
It was reassuring to hear Nat saying that “As we gain experience, we are not expected to know everything”. Pairing with developers out of college is an occasion to “exchange” skills. Hard learned design skills versus updates on the latest technologies.
I also learned about the Expert’s Amnesia and why experts often have a hard time teaching. Expert level knowledge is by nature instinctive. At this level of skill, it becomes very difficult to detail the logic of things that seem obvious.
We engineers are more mentors than coaches
In the first XP book, there were only 3 roles in the team : team members, on-site customer and XP coach. The XP coach should be a developer who can help the team to learn the practices and principles of XP.
Around the same time, the personal and professional coaching jobs appeared. The Scrum Master is to Scrum what the XP coach is to XP, without the developer part. Remember the joke “Scrum is like XP, without everything that makes it work” (Flaccid Scrum).
It looks like the Agile Coach job title appeared out of all this. The problem is that no one knows exactly what this is. Should he be an experienced developer like the XP coach ? A great people person ? Or someone good at introducing change ? Or a mix of these ?
There is no knowledge transfer from the coach to the coachees ! On the other hand, a mentor does transfer knowledge to his mentees. The coach helps his coachee take a step back and make decisions in full consciousness. The goal of the mentor is to inspire and to train in new techniques.
I’m fine being a mentor and not a coach ;–)
Servant leaders need to be tough at times
We hear a lot about servant leadership nowadays. Scrum Master should be servant leaders, as well as managers in agile organizations.
Angie Main gave a very interesting session about servant leadership. She made an interesting point I had not heard about before. We all know that servant leaders should trust the team to get the work done most of the time. In spite of that, servant leaders must also be ready to step in and remove people who don’t fit in and endanger the team !
This reminded me of what Jim Collins says in Built to Last : “People who don’t fit are expelled like viruses !”
1 idea in 3000 succeeds
Thanks to Ozlem Yuce’s session, I learned about the “Job To Be Done” technique to understand the customer’s real needs.
Studies measured that only 1 idea out of 3000 ends up as a successful product ! Here seems to be the original research.
I’ll remember this fact next time I’m asked for a funky feature !
In the end, we had a very good time at SPA conference. The talks were insightful, we had interesting discussions, the premises were comfortable and, on top of that, the food was great !