Forget Unit Tests, Only Fast Tests Matter

Don’t worry if your unit tests go to the DB, that might not be so bad.

When I started writing unit tests, I did not know what these were. I read the definition, and strived to follow the recommendations :

  • they should be independent from each other
  • they should not access the DB
  • they should not use the network
  • they should only cover a small scope of your code

I started to write unit tests on my own and became test infected pretty fast. Once I got convinced of the benefits of unit testing, I tried to spread the practice around me. I used to explain to people that it is very important to write real unit tests by the book. Otherwise, Bad Things would happen …

How I changed my mind

A few years ago, I spent a few years working on a Rails side project. I was using a small test gem to enforce that no unit tests were accessing the DB. I had to write a lot of mocks around the code. I ended up hating mocks : they are too painful to maintain and provide a false sense of security. I’m not alone in this camp, check DHH’s keynote at RailsConf 2014.

At some point, the mock pain got so bad that I stopped all development until I found another way. I found a pretty simple workaround : use an in-memory SQLite database. I got rid of all the DB access mocks. Not only were the tests easier to write and maintain, but they were as fast as before, and they covered more code.

That changed something fundamental in my understanding of testing.

It’s all about speed baby

The only thing that makes unit tests so important is that they run fast.

Unit tests as described in the literature run fast. Let’s see what happens when you remove one of the recommendations for unit tests.

  • If tests depend on each other, their outcome changes with the execution order, which wastes time when analyzing results. On top of that, independent unit tests are easy to run in parallel, providing an extra speedup. We lose this potential when our tests are dependent.
  • Tests that rely on an out-of-process DB run slower. The DB needs to be started before anything else. Data needs to be set up and cleaned for every test. Accessing the DB implies using the network, which takes time as well. There’s also a risk of making the tests dependent by sharing the same DB. A last issue is troubleshooting the DB process when things don’t work.
  • Tests that use the network are slow too ! First, the network is slower than memory. Second, data serialization between processes is slow as well. Finally, these tests are likely to use some form of sleep or polling, which is slow, fragile, or both !
  • Finally, there is always a scope past which a test will be too slow.

This means not only that unit tests are fast, but also that fast tests usually show the features of unit tests.

My guess is that ‘unit tests’ were explicitly defined as a recipe for fast tests ! If you stick to the definition of unit tests, you’ll get fast tests and all their benefits.

A speedometer

Fast tests

That also means that we should focus first on having fast tests rather than unit tests. Here is my rule of thumb to know if tests are fast enough :

  • Is the build (tests and everything included) under 10 minutes ?
  • Can I continuously run my tests while coding and stay in the flow ?

If both answers are yes, then I won’t worry too much about whether my tests are unit, integration or end-to-end tests.

So what ?

I’ve been experimenting with these heuristics for some time. Side projects are great for experimenting since you don’t have a team to convince ! Here are my main takeaways :

  • Stick to end-to-end tests at the beginning of your project. They are easy to refactor into finer-grained tests later on.
  • In-memory DBs are great to speed tests up without wasting your time with mocking. We can use a unique DB for every test to keep them independent.
  • Large-scope tests are not an issue, provided two things :
    1. The code contains very few side effects.
    2. It provides good exception and assertion messages.

On the other hand, there are things that I still recommend :

  • Independent tests are easy to write from the beginning, but difficult to fix later on. As they save a lot of diagnostic headaches, I stick to them from the start.
  • Avoid the network : it makes tests slow, fragile and tricky to diagnose. But please, read this before jumping to mocks.

These rules have served me well, particularly in my side projects, where I don’t have a lot of time. What about you ? Do you have your own testing rules ?

10 Things to Know That Will Make You Great at Refactoring Legacy Code

We write tons of legacy code every day. Experienced developers understand that legacy code is not something special. Legacy code is our daily bread and butter.

Should we abandon all hope as we enter legacy code ? Would that be professional ? In the end, code is only a bunch of bytes, somewhere on a drive. We are the software professionals. We need to deal with that.

Keep Calm and Take The Power Back

1. Master non legacy refactoring first

Please calm down before this “Bring ‘em out” energy goes to your head.

I did not say that refactoring legacy code is easy. Legacy code can bite … bad. I’ve been in teams which literally spent nights fixing a bad refactoring gone to production …

Before you can refactor legacy code, you need to be good at refactoring new code. We all learned to swim in the shallow pool, it’s the same with refactoring. Mastering green code refactoring will help you when tackling legacy code.

First, you’ll know the ideal you’d like to get to. Knowing how productive a fast feedback loop is will motivate you to keep on refactoring.

Second, you’ll have a better idea of the baby steps to take you through a tricky refactoring.

If you are not yet at ease with greenfield refactoring, have a look at my previous post.

2. Understand that refactoring legacy code is different

The next thing to remember is that refactoring legacy code is different. Let’s assume Michael Feathers’s definition of legacy code : “Code without tests”. Getting rid of legacy code means adding automated tests.

Unfortunately, trying to force unit tests into legacy code usually results in a mess. It introduces lots of artificial mocks in a meaningless design. It also creates brittle and unmaintainable tests. More harm than good. This might be an intermediate step, but it is usually not the quickest way to tame your legacy code beast.

Here are alternatives I prefer.

3. Divide and conquer

This is the most straightforward way to deal with legacy code. It’s an iterative process to repeat until you get things under control. Here is how it goes :

(1) Rely on the tests you have, (2) to refactor enough, (3) to test sub-parts in isolation. (4) Repeat until you are happy with the speed of the feedback loop.

Depending on the initial state of your tests, this might take more or less time. Your first tests might even be manual. This is the bulldozer of refactoring. Very effective, but slow.


4. Pair or mob program

Given enough eyeballs, all bugs are shallow.

Linus’s Law

Changing legacy code is a lot easier when you team up. First, it creates a motivating “we’re all in this together” mindset. Second, it guards us against silly mistakes.

Mob programming might seem very expensive, so let me explain why it is not. Suppose you want to introduce some tests in a tricky section of code.

With mob programming, all the team gathers for half a day to work on this change. Together, they find and avoid most of the pitfalls. They commit a high quality change, which creates only one bug down the road.

Let’s see the alternative.

Using solo programming, a poor programmer tries to tackle the change all by himself. He spends a few days understanding and double-checking all the traps he can think of. Finally, he commits his change, which results in many bugs later on. Every time a bug pops up, it interrupts someone who has to fix it ASAP.

The savings in interruptions are greater than the up-front cost of mob or pair programming.

5. Seams

A software seam is a place where you can alter behavior in your program without editing in that place.

Michael Feathers

This is one of the many interesting things I learned from Michael’s book about legacy code.

Cover of Working Effectively with Legacy Code

Object polymorphism is only one kind of seam. Depending on your language, many other types of seams can be available. 

  • Type seam for generic languages
  • Static link seam for static libraries
  • Dynamic link seam for dynamic libraries

Finding seams in your program is opportunistic. Keep in mind though that testing through seams is not the end goal. It is only a step to bootstrap the test-refactor loop and start your refactoring journey.
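Here is a minimal Java sketch of an object seam. The class names are made up for illustration :

```java
// Legacy class with a hard-wired slow call.
class ReportGenerator {
    String generate(int orderId) {
        return "Report for " + fetchOrder(orderId);
    }

    // The seam : a test can alter this behavior without editing generate().
    protected String fetchOrder(int orderId) {
        // Imagine a slow database or network call here.
        throw new UnsupportedOperationException("no DB in tests");
    }
}

// The test exploits the seam by subclassing.
class ReportGeneratorWithFakeOrder extends ReportGenerator {
    @Override
    protected String fetchOrder(int orderId) {
        return "order-" + orderId; // canned data, no DB needed
    }
}

public class SeamDemo {
    public static void main(String[] args) {
        ReportGenerator generator = new ReportGeneratorWithFakeOrder();
        System.out.println(generator.generate(42)); // prints "Report for order-42"
    }
}
```

This is enough to get a first test around generate() without touching the legacy class itself.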

6. Mikado Method

How do you reach your end goal then ? How do you refactor only what’s useful for your features ? How do you do large refactorings in baby steps ?

Over time, I found that the Mikado Method is a good answer to all these issues. The goal of the Mikado Method is to build a graph of dependent refactorings. You can then use it to perform all these refactorings one by one. Here is the Mikado Method by the book.

Before anything else, you’ll need a large sheet of paper to draw the graph. Then repeat the following :

  1. Try to do the change you want.
  2. If it builds and the tests pass, great : commit and you’re done.
  3. Otherwise, add a node for the change you wanted to do in your mikado graph.
  4. Write down the compilation and test errors.
  5. Revert your change.
  6. Recurse from 1 for every compilation or test error.
  7. Draw a dependency arrow from the nodes of the errors to the node of your initial change.

Once you have built the full graph, tackle the refactorings from the leaves. As leaves have no dependencies, it should be easy to do and commit them.
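Walking the graph from the leaves is really a topological walk. Here is a toy Java sketch of it, with made-up refactoring names :

```java
import java.util.*;

public class MikadoWalk {
    // For each refactoring, the list of refactorings it depends on.
    // A mikado graph is acyclic by construction, so this loop terminates.
    static List<String> walk(Map<String, List<String>> prerequisites) {
        List<String> order = new ArrayList<>();
        Set<String> done = new HashSet<>();
        while (done.size() < prerequisites.size()) {
            for (Map.Entry<String, List<String>> e : prerequisites.entrySet()) {
                // A leaf is a change whose prerequisites are all committed.
                if (!done.contains(e.getKey()) && done.containsAll(e.getValue())) {
                    order.add(e.getKey());
                    done.add(e.getKey());
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = new LinkedHashMap<>();
        graph.put("extract PriceCalculator", Collections.<String>emptyList());
        graph.put("inline legacy helper", Collections.<String>emptyList());
        graph.put("switch Order to PriceCalculator",
                  Arrays.asList("extract PriceCalculator", "inline legacy helper"));
        // Leaves come out first, the initial goal comes out last.
        System.out.println(walk(graph));
    }
}
```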

A Sample Mikado Graph

When I first read about the mikado method, it seemed very simple and powerful. Things got more complex when I tried to apply it. For example, the fact that some changes don’t compile hides future test failures. That means that very often, the “Build the graph” and “Walk the graph” phases overlap. In real life, the graph evolves and changes over time.

My advice about the Mikado Method is not to follow it to the letter. It’s a fantastic communication tool. It helps you not to get lost and to avoid a refactoring tunnel. It also helps to tackle refactoring as a team.

It is not a strict algorithm though. Build and tests are not the only way to build the graph. Very often, a bit of thinking and expert knowledge are the best tools at hand.

Cover of The Mikado Method book

7. Bubble Context

Refactoring needs to be opportunistic. Sometimes there are shortcuts in your refactoring path.

If you have access to a domain expert, the Bubble Context will cut the amount of refactoring to do. It’s also an occasion to get rid of all the features that are in your software but that are not required anymore. 

The Bubble Context originated from the DDD community, as a way to grow a domain in an existing code base. It goes like that :

  1. Find a domain expert
  2. (Re)write clean code for a very tiny sub domain
  3. Protect it from the outside with an anticorruption layer
  4. Grow it little by little
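Here is a toy Java sketch of step 3, the anticorruption layer. All the names and the legacy format are hypothetical :

```java
// Inside the bubble : a clean, tiny domain model.
class Customer {
    final String name;
    final boolean active;
    Customer(String name, boolean active) { this.name = name; this.active = active; }
}

// Legacy side : the old code base's flat representation.
class LegacyCustomerRow {
    final String raw; // e.g. "DUPONT;1"
    LegacyCustomerRow(String raw) { this.raw = raw; }
}

// Anticorruption layer : the only place that knows both worlds.
class CustomerTranslator {
    static Customer toDomain(LegacyCustomerRow row) {
        String[] fields = row.raw.split(";");
        return new Customer(fields[0], fields[1].equals("1"));
    }
}

public class BubbleDemo {
    public static void main(String[] args) {
        Customer c = CustomerTranslator.toDomain(new LegacyCustomerRow("DUPONT;1"));
        System.out.println(c.name + " active=" + c.active);
    }
}
```

The clean domain code only ever sees Customer, never LegacyCustomerRow.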

I have friends who are fans of the bubble context. It is super effective provided you have a domain expert. It is a method of choice in complex domain software.

8. Strangler

The Bubble Context works great when refactoring domain-specific code, but what about the rest ? I had good results with the Strangler pattern.

For example, we had to refactor a rather complex parser for an internal DSL. It was very difficult to incrementally change the old parser, so we started to build a new one aside. It would try to parse, but delegate to the old one when it failed. Little by little, the new parser was handling more and more of the grammar. When it supported all the inputs, we removed the old one.
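The parser story can be sketched like this in Java. The grammar is reduced to a silly example, and all names are made up :

```java
import java.util.Optional;

public class StranglerParser {
    // The new parser only understands part of the grammar so far.
    static Optional<Integer> newParse(String input) {
        try {
            return Optional.of(Integer.parseInt(input.trim()));
        } catch (NumberFormatException e) {
            return Optional.empty(); // not handled yet
        }
    }

    // The old parser handles everything, in its old convoluted way.
    static int oldParse(String input) {
        // Stand-in for the legacy logic (roman numerals, hex, ...).
        if (input.equals("XII")) return 12;
        throw new IllegalArgumentException("unknown input: " + input);
    }

    // Strangler entry point : try the new code, delegate on failure.
    static int parse(String input) {
        return newParse(input).orElseGet(() -> oldParse(input));
    }

    public static void main(String[] args) {
        System.out.println(parse("42"));  // handled by the new parser
        System.out.println(parse("XII")); // delegated to the old one
    }
}
```

When newParse() finally covers the whole grammar, oldParse() can be deleted.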

The strangler is particularly well suited for refactoring technical components. They have more stable interfaces and can be very difficult to change incrementally.

9. Parallel Run

This is more of a trick than a long term strategy. The idea is to use the initial (legacy) version of the code as a reference for your refactoring. Run both and check that they are doing the same thing.

Parallel Railroads

Here are some variations around this idea.

If the code you want to refactor is side-effect free, it should be easy to duplicate it before refactoring. This enables running both versions to check that they compute the same thing.

Put this in a unit test to bootstrap a test-refactor loop. You can also run both in production and log any difference. You’ll need access to production logs … DevOps teams have a refactoring advantage !
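Here is what such a parallel-run check could look like in Java, with a deliberately trivial pair of functions standing in for the legacy and refactored code :

```java
import java.util.stream.IntStream;

public class ParallelRun {
    // The legacy version, duplicated and kept as the reference.
    static int legacyPrice(int quantity) {
        int total = 0;
        for (int i = 0; i < quantity; i++) total += 3;
        return total;
    }

    // The refactored version under work.
    static int newPrice(int quantity) {
        return quantity * 3;
    }

    public static void main(String[] args) {
        // The parallel run : both versions must agree on a wide range of inputs.
        IntStream.range(0, 1000).forEach(q -> {
            if (legacyPrice(q) != newPrice(q))
                throw new AssertionError("mismatch for quantity " + q);
        });
        System.out.println("the refactored version matches the legacy one");
    }
}
```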

Here is another use of your logs. If the code writes a lot of logs, we can use them as a reference : capture the logs of the old version, and unit test that the refactored version prints the same logs. That’s an unmaintainable test, but it’s good enough to bootstrap the test-refactor loop.
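Here is a toy Java sketch of this log-comparison idea, with two made-up versions of the same computation :

```java
import java.util.ArrayList;
import java.util.List;

public class GoldenMaster {
    // Old version : convoluted, but it is the reference.
    static List<String> runOldVersion(int days) {
        List<String> log = new ArrayList<>();
        int quality = 10;
        for (int d = 0; d < days; d++) {
            quality = quality - 1;
            log.add("day " + d + " quality " + quality);
        }
        return log;
    }

    // Refactored version : must produce exactly the same logs.
    static List<String> runNewVersion(int days) {
        List<String> log = new ArrayList<>();
        for (int d = 0; d < days; d++) {
            log.add("day " + d + " quality " + (10 - d - 1));
        }
        return log;
    }

    public static void main(String[] args) {
        // The golden-master check.
        if (!runOldVersion(5).equals(runNewVersion(5)))
            throw new AssertionError("logs differ");
        System.out.println("logs identical");
    }
}
```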

The Gilded Rose kata is a good exercise to practice this last technique.

10. Dead code is better off dead

You don’t need to refactor dead code ! Again, access to production logs is a great advantage for refactoring.

Add logs to learn how the real code runs. If it’s never called, then delete it. If it’s only called with some set of values, simplify it.
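Here is a hypothetical Java sketch of such a probe, with a simple counter standing in for real production logs :

```java
import java.util.concurrent.atomic.AtomicLong;

public class DeadCodeProbe {
    // Hypothetical probe : count how often a suspicious branch really runs.
    static final AtomicLong suspectedDeadCalls = new AtomicLong();

    static int legacyDiscount(int price, boolean vipFlag) {
        if (vipFlag) {
            // Suspected dead : log before deleting.
            suspectedDeadCalls.incrementAndGet();
            return price / 2;
        }
        return price;
    }

    public static void main(String[] args) {
        // Simulate production traffic ...
        for (int i = 0; i < 1000; i++) legacyDiscount(100, false);
        // ... then check the logs : if the counter is still 0, the branch can go.
        System.out.println("vip branch called " + suspectedDeadCalls.get() + " times");
    }
}
```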

No silver bullet

That was a whirlwind tour of the legacy code refactoring techniques I know. It’s no promise that refactoring will become easy or fast. I hope it is a good starting point to set up and walk a refactoring plan.

This was the last post of a series of 3 about how to learn refactoring techniques. If you didn’t already, check part 1 7 Reasons Why Learning Refactoring Techniques Will Improve Your Life as a Software Engineer and part 2 How to Start Learning the Tao of Incremental Code Refactoring Today.

How to Start Learning the Tao of Incremental Code Refactoring Today

In my last post, I explained why incremental refactoring techniques will make you both more productive and relaxed.

As anything worth its salt, the path to full mastery is long and requires dedication. The good news is that you’ll start to feel the benefits long before you are a master.

Dedicated Practice

The quickest thing that will get you up to speed is dedicated practice. Take some time to do some exercises outside of any ‘production’ code.

TDD Coding Katas

The most famous practice to learn TDD also works very well to learn refactoring. That shouldn’t be a surprise as incremental refactoring is an integral part of TDD.

There are many ways to do your first coding kata. You could find a coding dojo near you, or you could find motivated colleagues to start one at your company … I wrote in more detail about how to attend a coding dojo in this post.

Emily Bache's Coding Dojo book cover

You can also practice katas on your own. My friend Thomas Pierrain rehearses the same katas to discover deeper insights.

Refactoring Golf

The goal of incremental refactoring is to keep the code production-ready all the time. Smaller commits are one happy consequence of that.

You can stretch your refactoring muscles by doing coding katas and keeping the code compiling all the time. You’ll need to master your IDE and its automated refactoring. Most of all, it will shift your attention from the goal to the path !

I learned at the SPA conference that this is called ‘Refactoring Golf’. The name comes from golf contests, popular in the Perl community, where the goal is to write the shortest program possible for a specific task. The goal of a Refactoring Golf is to go from code A to code B in the fewest transformations possible.

There are a few refactoring golf repos on GitHub. I tried one and found it fun ! Give it a try too !

Study some theory

Real mastery does not come by practice alone. Studying theory alongside practice yields deeper insights. Theory enables you to put your practice into perspective and to find ways to improve it. It saves you from getting stuck in bad habits. It also saves you from having to rediscover everything by yourself.

Develop your design taste

In Taste for Makers, Paul Graham explains why taste is fundamental to programming. Taste is what allows you to judge whether code is nice or bad in a few seconds. Taste is subjective, intuitive and fast, unlike rules, which are objective but slower. Expert designers use taste to pinpoint issues and good points in code on the spot.

Within the fast TDD – Refactoring loop, taste is the tool of choice to drive the design. Guess what : we can all improve our design taste !

Code smells are the first things to read about to improve your design taste. Once you know them well enough, it will be possible to spot things that might need refactoring as you code.

Spotting problems is nice, but finding solutions is better ! Design Patterns are just that … There has been a lot of controversy around Design Patterns. If overusing them leads to bloated code, using them to fix strong smells makes a lot of sense. There is even a book about the subject :

Joshua Kerievsky's Refactoring To Patterns book cover

Finally, there’s a third and most common way to improve our design taste : read code ! The more code we read, the better our brain becomes at picking up small clues about what is nice and what is not. It’s important to read clean code but also bad code, to read code in different languages, and code built on different frameworks.

So, read code at work, read code in books, read code in open source libraries, good code, legacy code …

Learn your refactorings

As with most topics in programming, there is a reference book about refactoring : Martin Fowler’s Refactoring : Improving the Design of Existing Code. Everything is in there : smells, unit testing and a repository of refactoring walkthroughs.

Martin Fowler's refactoring book cover

The book is said to be a difficult read, but the content is worth gold. If you have the grit, give it a try ! At the end, you should understand how your IDE does automated refactoring. You should also be able to perform by hand all the refactorings that your IDE does not provide ! This will enlarge your refactoring toolbox, and help you to drive larger refactorings from A to B.

Develop a refactoring attitude

Practice makes perfect. Whatever our refactoring skill, there is something to learn by practicing more.

Make it a challenge

As you are coding, whenever you find a refactoring to do to your code, make it a challenge to perform it in baby steps. Try to keep the code compiling and the tests green as much as possible.

When things go wrong, revert instead of pushing forward. Stop and think, try to find a different path.

If you are pairing, challenge your pair to find a safer track.

This might delay you a bit at first, but you’ll also be able to submit many times per day. You’ll see that your refactoring muscles will grow fast. You should see clear progress in only 1 or 2 weeks.

Team up against long refactorings

If your team prioritizes a user story that will need some re-design, try to agree on a refactoring plan. The idea is to find a coarse grain path that will allow you to commit and deliver many times. This plan might also help you to share the work on the story.

Having to question and explain your assumptions will speed up your learning. 

Legacy code

Refactoring is most useful with bad legacy code. Unfortunately, it is also where it is the most difficult. Next week’s blog post will be about what we can do to learn how to refactor legacy code.

That was my second post in this mini-series about refactoring. First one was 7 Reasons Why Learning Refactoring Techniques Will Improve Your Life as a Software Engineer. The third and last is 10 things to know that will make you great at refactoring legacy code

7 Reasons Why Learning Refactoring Techniques Will Improve Your Life as a Software Engineer

This post is a bold promise. Mastering incremental refactoring techniques makes our lives as software engineers more enjoyable.

I have already made the same statement about TDD before. As refactoring is a part of TDD, one could think I am repeating myself. At the same time, a recent Microsoft blog post argued that refactoring is more important than TDD. Even though I’m a TDD fan, that’s an interesting point.

Incremental refactoring is key to make releases non-events ! As early as 2006, using XP, we were releasing mission critical software without bugs ! We would deliver a new version of our software to a horde of angry traders and go to the movies without a sweat !

What’s so special about incremental refactoring ?

Avoid the tunnel effect

A long tunnel

Mastering incremental refactoring techniques allows you to break a feature down into baby steps. Not only smaller commits, but also smaller releases ! You can deploy and validate every step in production before moving to the next !

Small releases are also a lot easier to fix than big bang deployments. That alone is a good enough reason to deploy in baby steps.

There are a lot of other advantages to small deployments. Merges become straightforward. Someone can take over your work if you get sick. Finally, it’s also easier to switch to another urgent task if you need to.

Deliver early

When you know that you will be able to improve your work later on, it becomes possible to stick to what’s needed now. After spending some time working on a feature, it might turn out that you have delivered enough value. Between enhancing this feature and starting another one, pick the most valuable. Don’t be afraid to switch. Incremental refactoring makes it easy to resume later on if it makes sense.

Real productivity is not measured through code, but through feature value. This explains why incremental refactoring is more productive than up-front / big-bang development.

Know where you stand

As you’ll work through your feature, you’ll have to keep track of the done and remaining steps. As you go through this todo list and deliver every successive step, you get a pretty clear idea of where you stand. You’ll know that you’ve done 3 out of 7 steps for example. It helps everyone to know what’s the remaining work and when you’ll be able to work on something else.

Tangled wool

A few times, I fell in the trap of features that should have taken a few hours and that lingered for days. I remember how stupid I was feeling every morning, explaining to my colleagues that it was more complex than I had thought, but that it should be finished before tomorrow … Learning incremental refactoring techniques saved me from these situations.

Deliver unexpected feature

Incremental refactoring techniques improve the code. As a systematic, team-wide effort, they keep the code healthy and easy to evolve. When someone requests an unexpected feature late, you’ll be able to deliver it.

This should improve your relationship with product people. They will be very happy when you build their latest idea without a full redesign.

Avoid rewrites

Joel Spolsky wrote a long time ago that rewriting a large piece of software is the number 1 thing not to do ! All my experiences in rewriting systems have been painful and stressful.

It always starts very rosy. Everyone is feeling very productive with the latest tools and technologies. Unfortunately, it takes a lot of features to replace the existing system. As always with software, the time estimates for the rewrite are completely wrong. As a result, everyone starts grumbling about why this rewrite is taking so long. The fact that the legacy system is still evolving does not help either. Long story short, the greenfield project ends up cutting corners and taking on technical debt pretty fast … fueling the infamous vicious circle again.

Incremental refactoring techniques offer an alternative. They enable you to change and improve the architecture of the legacy system. It looks longer, but it’s always less risky. And looking back, it’s almost always faster as well !

Ease pair programming

eXtreme Programming contains a set of practices that reinforce each other. As I wrote at the beginning, refactoring goes hand in hand with TDD. Pair programming is another practice of XP.


TDD and Refactoring simplify pair programming. When a pair is doing incremental refactoring, they only need to discuss and agree on the design at hand. They know that however the design needs to evolve in the long term, they’ll be able to refactor it. It’s a lot easier to pair program if you don’t have to agree on all the details of the long term design …

In turn, pair programming fosters collective code ownership. Collective code ownership increases the truck factor. Which reduces the project risks and makes the team’s productivity more stable. In the long run, this makes the work experience more sustainable and less stressful.

Simplify remote work

Refactoring will also save you from the commutes and allow you to work closer to the ones you love !

Refactoring techniques enable small commits. Small commits simplify code reviews, which are key to remote or distributed work. Even if you are doing remote pair programming, small commits help to switch the control between buddies more often.


To be continued

I hope that by now I persuaded you to learn incremental refactoring techniques. My next post will dig into the details about how to do that.

5 SPA Conference Takeaways That Could Make Us Better Software Professionals

Last week, my colleague Ahmad Atwi and I went to the London SPA Conference to give our Remote eXtreme Practice talk.

The London eXtreme Programming community is one of the most active in the world. You could feel the XP atmosphere at the conference. For example, people like Nat Pryce and Steve Freeman, authors of the GOOSGT book, were speakers.

The cover of Growing Object-Oriented Software, Guided By Tests

To summarize, we had the chance to attend a lot of very interesting sessions during the 3 days of the conference. Here are 5 pearls of wisdom I took back with me.

What connascences are

Identifying code connascences helps to rank refactorings and keep the system maintainable.

Continuous refactoring is one of the core practices of XP. For me, knowing what to refactor next has been a matter of code smells, discussing with my pair and gut feeling.

A connascence is a coupling between parts of the system. Two parts of your code are connascent if changing one implies changing the other. For example, a function call is connascent by name with the function definition. If you change one, you need to change the other.

Connascences are more formal than code smells. We can detect and rank them to pick the most important refactoring to do. People have listed 9 types of connascence. Some are visible in the source code, others are dynamic and difficult to spot before runtime.

The lowest form of connascence is ‘of name’, like in the function call example above. The worst form is ‘of Identity’, when different parts of the system must reference the same object.
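Both forms can be illustrated with a tiny Java sketch (all names are made up) :

```java
public class ConnascenceDemo {
    // Connascence of name : the call below must change if we rename this method.
    static int vatRate() { return 20; }

    static int priceWithVat(int price) {
        return price + price * vatRate() / 100; // coupled to vatRate() by name only
    }

    // Connascence of identity (the worst) : the two methods below only work
    // if they both reference this very same object.
    static final java.util.List<String> AUDIT_LOG = new java.util.ArrayList<>();

    static void recordSale(int price) { AUDIT_LOG.add("sold at " + price); }
    static int salesCount() { return AUDIT_LOG.size(); }

    public static void main(String[] args) {
        System.out.println(priceWithVat(100)); // prints 120
        recordSale(100);
        System.out.println(salesCount()); // prints 1
    }
}
```

Renaming vatRate() is a mechanical, tool-assisted fix ; splitting the shared AUDIT_LOG is a much harder redesign.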

The higher the connascence, the more difficult it is to evolve the parts involved. Instead of relying on intuition, you can use a connascence-based refactoring algorithm :

  1. Detect the highest connascence
  2. Reduce or remove it
  3. Repeat.

Thanks Kevin Rutherford and Adrian Mowat for your Red Green then what ? session about connascence.

Tips for pairing with junior developers

Irina Tsyganok and Nat Pryce gave a very fun session about this topic. A lot of valuable points were discussed, from which I saved a few pearls of wisdom.

It was reassuring to hear Nat say that “As we gain experience, we are not expected to know everything”. Pairing with developers fresh out of college is an occasion to “exchange” skills : hard-learned design skills versus updates on the latest technologies.

I also learned about the Expert’s Amnesia and why experts often have a hard time teaching. Expert-level knowledge is by nature instinctive. At this level of skill, it becomes very difficult to explain the logic of things that seem obvious.

We engineers are more mentors than coaches

In the first XP book, there were only 3 roles in the team : team members, on site customer and XP coach. The XP coach should be a developer who can help the team to learn the practices and principles of XP.

About the same time, the personal and professional coaching jobs appeared. The Scrum Master is to Scrum what the XP coach is to XP, without the developer part. Remember the joke “Scrum is like XP, without everything that makes it work” (Flaccid Scrum).

It looks like the Agile Coach job title appeared out of all this. The problem is that no one knows exactly what it is. Should they be an experienced developer like the XP coach ? A great people person ? Someone good at introducing change ? Or a mix of these ?

Portia Tung and Helen Lisowski’s talk “The power of coaching” clarified that.

There is no knowledge transfer from the coach to the coachees ! A mentor, on the other hand, does transfer knowledge to his mentees. The coach helps his coachees take a step back and make decisions in full consciousness. The goal of the mentor is to inspire, and to train people in new techniques.

I’m fine being a mentor and not a coach ;–)

Servant leaders need to be tough at times

We hear a lot about servant leadership nowadays. Scrum Masters should be servant leaders, as well as managers in agile organizations.

Angie Main gave a very interesting session about servant leadership. She made an interesting point I had not heard about before. We all know that servant leaders should trust the team to get the work done most of the time. In spite of that, servant leaders must also be ready to step in and remove people who don’t fit in and endanger the team !

This reminded me of what Jim Collins says in Built to Last : “People who don’t fit are expelled like viruses !”

The cover of Built to Last

1/3000 ideas succeeds

Thanks to Ozlem Yuce’s session, I learned about the “Job To Be Done” technique to understand the customer’s real needs.

Studies measured that only 1 idea out of 3000 ends up as a successful product ! Here seems to be the original research.

I’ll remember this fact next time I’m asked for a funky feature !

To conclude

In the end, we had a very good time at SPA conference. The talks were insightful, we had interesting discussions, the premises were comfortable and on top of that, the food was great !

I’m already eager to go to SPA conference 2018 !

Don’t Stick to TDD’s Red-Green-Refactor Loop to the Letter

As long as you are writing your tests before your code and doing regular refactoring, you are doing TDD !

The Red – Green – Refactor loop is useful to introduce TDD to new developers, but different loops can be more effective in real-world situations.

The Red – Green – Refactor loop is not a dogma !

The famous red, green, refactor TDD loop

Refactor – Red – Green

When I work on a story, I very often keep a TODO list next to my desk. I use it to keep track of the next steps, the edge cases to test, the code smells and refactorings to do.

When I get to the end of the story, all that remains of this list is a few refactorings. Very often, I don’t do them !

With the feature working, doing these refactorings would feel like a violation of YAGNI. Next time we have to work on this part of the code, we’ll have a story to serve as a guide to which refactorings to do.

The same thing is effective at the unit test scale. It’s easier to refactor when you know the test you want to add. Refactor to make this test easy to write !

Here is an example with Fizz Buzz

static int fizzBuzz(int number) {
   return number;
}

@Test public void
it_is_1_for_1() {
   assertEquals(1, fizzBuzz(1));
}

@Test public void
it_is_2_for_2() {
   assertEquals(2, fizzBuzz(2));
}

Here is the test I’d like to add. 

@Test public void
it_is_Fizz_for_3() {
   assertEquals("Fizz", fizzBuzz(3));
}

Unfortunately, fizzBuzz needs to return a String instead of an integer for it to compile. That’s when I would refactor before adding the new test.

static String fizzBuzz(int number) {
   return Integer.toString(number);
}

@Test public void
it_is_1_for_1() {
   assertThat(fizzBuzz(1)).isEqualTo("1");
}

@Test public void
it_is_2_for_2() {
   assertThat(fizzBuzz(2)).isEqualTo("2");
}

In the end, this loop is very much like the classic TDD loop :


A bit more YAGNI, that’s all.

Red – Better Red – Green – Refactor

A few weeks ago, I wrote about error messages in unit tests. To summarize, extra work on error messages shortens the testing feedback loop.

We can translate this focus on error messages into an extra TDD step. Whatever the TDD loop you are using, you can add this step after the Red step.
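As an illustration of this extra step, here is a hand-rolled sketch on the Fizz Buzz example (the check helper and the message wording are mine, not from a real assertion library) : the point is to improve the failure message while the test is still red.

```java
public class BetterRedDemo {
    // minimal stand-in for an assertion library
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // not implemented yet : the test below is on its Red step
    static String fizzBuzz(int number) {
        return Integer.toString(number);
    }

    public static void main(String[] args) {
        String actual = fizzBuzz(3);
        // A bare red would say something like "expected:<Fizz> but was:<3>".
        // A better red spells out the broken rule before we make the test pass :
        try {
            check("Fizz".equals(actual),
                  "multiples of 3 should print Fizz, but fizzBuzz(3) returned \"" + actual + "\"");
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

When this red turns green later, the extra message keeps paying off on every future regression.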

Red – Green – Refactor – Red – Green

Sometimes, it makes sense to refactor before fixing the test. The idea is to rely on the existing tests to prepare the code to fix the new test in one line.

Let’s take our Fizz Buzz example again. Imagine we have finished the kata when we decide to tweak the rules and try Fizz Buzz Bang : we should now also print Bang on multiples of 7.

Here is our starting point :

static String fizzBuzz(int number) {
   if (multipleOf(number, 3*5)) {
      return "FizzBuzz";
   }
   if (multipleOf(number, 3)) {
      return "Fizz";
   }
   if (multipleOf(number, 5)) {
      return "Buzz";
   }
   return Integer.toString(number);
}


@Test public void
it_is_Bang_for_7() {
   assertThat(fizzBuzz(7)).isEqualTo("Bang");
}

I could go through all the hoops : 7, 14, then 3*7, 5*7 and finally 3*5*7 … By now, I should know the music though !

What I would do in this case is :

  • first, comment out the new failing test to get back to green
  • refactor the code to prepare for the new code
  • uncomment the failing test
  • fix it

In our example, here is the refactoring I would do

static String fizzBuzz(int number) {
   String result = "";
   result += multipleWord(number, 3, "Fizz");
   result += multipleWord(number, 5, "Buzz");
   if (result.isEmpty()) {
      result = Integer.toString(number);
   }
   return result;
}

private static String multipleWord(int number, int multiple, String word) {
   if (multipleOf(number, multiple)) {
      return word;
   }
   return "";
}

//@Test public void
//it_is_Bang_for_7() {
//   assertThat(fizzBuzz(7)).isEqualTo("Bang");
//}

From there, fixing the test is dead simple.
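To make this concrete, here is the whole function after the fix : a single new multipleWord line. The multipleOf helper is assumed from the kata and reimplemented here so the sketch is self-contained.

```java
public class FizzBuzzBang {
   // assumed kata helper, reimplemented for completeness
   static boolean multipleOf(int number, int multiple) {
      return number % multiple == 0;
   }

   private static String multipleWord(int number, int multiple, String word) {
      return multipleOf(number, multiple) ? word : "";
   }

   static String fizzBuzz(int number) {
      String result = "";
      result += multipleWord(number, 3, "Fizz");
      result += multipleWord(number, 5, "Buzz");
      result += multipleWord(number, 7, "Bang"); // the one-line fix for the new test
      if (result.isEmpty()) {
         result = Integer.toString(number);
      }
      return result;
   }

   public static void main(String[] args) {
      System.out.println(fizzBuzz(7));   // Bang
      System.out.println(fizzBuzz(21));  // FizzBang
      System.out.println(fizzBuzz(105)); // FizzBuzzBang
   }
}
```

Note how the concatenation-based design absorbs the new rule : combined cases like 21 or 105 work without any extra code.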

In practice, I find this loop very useful. It works at the local scale, as we saw, but it’s also a great way to refactor your architecture at a larger scale.

One downside is that if you are not careful, it might lead to over-engineering. Be warned, keep an eye on that !

Last caveat : not all TDD interviewers like this technique …

Don’t obsess

Not following the Red – Green – Refactor loop to the letter does not mean you are not doing TDD.

An interesting point is that these variations to the TDD loop are combinable ! Experienced TDD practitioners can jump from one to the other without even noticing.

This paper argues that as long as you write the tests along (before or after) the code, you get the same benefit. That’s not going to make me stop writing my tests first, but it is interesting. That would mean that even a Code – Test – Refactor loop would be ok if it is fast enough !

13 Tricks for Successful Side Projects

As I said last week, I released the v0.1 of Philou’s Planning Poker, my latest side project. Although I have a day job, a wife, a family and a mortgage to pay, I still manage to finish my side projects. In the past 7 years, I published 5 of these as open source projects, websites, or wannabe businesses.

Side projects rely on 2 things : time and motivation. If motivation goes down, you’ll  stop working on it, and it will die. If you don’t manage to find enough time for it, it will also die.

Over the years, I accumulated best practices that increase the chances of success. Here is a shortlist of 13 of these.

A comic strip about side projects

1. Know your goal

As I said before, side projects are time constrained. If you try to follow many goals at once, you’ll spread too thin and won’t deliver anything. That will kill your motivation.

To avoid this, you need to decide on a unique goal for your project. It can be anything : learning a new tech, building a tool, selling a simple product, maintaining a blog.

Depending on the nature of your goal, your side project can take different forms. 20-hour experiments are great for learning new techs. As a side note, MOOCs can also be very effective for this. If you want to start a business, start with a lean startup concierge MVP. Finally, if you already know users who need a tool, build a minimalistic version for them.

2. Time box your work

Time boxing will force you to make the choices that will keep you going forward. The risk is to take on too many topics : more refactoring, more UI polish, more options, more bells and whistles. All these can be very interesting and valuable, but are usually not the main priority.

20-hour programs are time boxes, that’s one of the reasons they work. For other kinds of side projects, I do a quarterly prioritization : “This is what I’d like to have in 3 months”. I often slip a bit, but that’s not a problem as long as I stay focused on my goal.

3. Set up a routine

You’ll need to dedicate time to your side project. Think of what you could do if you worked on it one hour per day : 365 hours per year, or 90 hours per quarter. That’s 2 full weeks of work !

In the long run, having a routine is more effective than anything else. After a few weeks of sticking to a routine, it will become part of your daily life, and won’t be an effort anymore. It will also help to forecast what you’ll be able to do in the coming month or so.

To set up a routine, block a slot in your day to work on your project, and stick to it. My own routine is waking up early to have some focused time. I have entrepreneur friends who do the same. GrassHopper’s founder says the same in this Indiehacker podcast.

4. Keep delivering to sustain motivation

Nothing kills motivation like not delivering. At work, I can go on without user feedback for a while (not too long though). Unfortunately, that does not work on a time-constrained side project. We have only one life and we don’t want to spend our time on things that don’t matter. Things that don’t deliver don’t matter …

To get the technical aspect of delivery out of the picture once and for all, I use Continuous Delivery. Continuous Delivery is pretty easy to start with on a new project :

  • automate all your tests
  • set up a CI server
  • deploy when the CI passes

Once this is up and running, as long as I split my work in baby steps, I’ll be delivering.
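As a sketch, a minimal Travis CI configuration for a Ruby project deploying to Heroku could look like the following. The app name and the encrypted API key are placeholders, not values from my actual project :

```yaml
language: ruby
rvm:
  - 2.4
script:
  - bundle exec rspec
deploy:
  provider: heroku
  app: my-side-project        # placeholder Heroku app name
  api_key:
    secure: "ENCRYPTED-HEROKU-API-KEY"  # placeholder, generated with the travis CLI
```

With a file like this committed, every green build on the main branch gets deployed automatically.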

The cover of the continuous delivery book

5. Use SaaS tools

Setting up a CI and a deployment server can take some time. In 2017 though, online platforms make this very easy. Use as many as you can.

For Philou’s Planning Poker, I save my code on Github, test through Travis CI and deploy to Heroku. I also use Code Climate for static code analysis.

Most of these tools have some free plans for small or open source projects. That alone is a great advantage of making your project open source !

6. Pay for good tools

If you don’t want to make your project open source, consider paying for these services. How much you value your time will tell you whether to buy or not.

There are other things you should pay for as well. I definitely recommend paying for a good laptop and IDE.

Remember, anything that helps you to deliver also helps you to keep your motivation high. You have a day job that earns you money, so use it !

7. Pick a productive language 

Depending on your project, you’ll have a choice in which programming language to use.

Paul Graham advises using dynamic languages. I tend to do the same, especially after watching “The Unreasonable Effectiveness of Dynamic Typing for Practical Programs”.

A presentation about dynamic typing

In the end, I guess it’s a matter of personal preference. Pick the language you’ll be the most productive with.

8. Use a popular platform

Use a popular open source platform to build your side project on. Needless to say, if your goal is to learn X, use X, even if it is not popular !

There are many advantages to using a popular platform :

  • you’ll have something that has already been production-proven
  • you’ll suffer fewer bugs (remember Linus’s Law : “Given enough eyeballs, all bugs are shallow”)
  • you’ll get help from the community
  • you’ll find compatible libraries to solve some of your problems

The end goal is always the same : sustain your motivation by delivering fast enough !

9. Walk the edge

We don’t start side projects to spend time updating dependencies. The saying goes “If it hurts, do it more often”. To save your productivity and motivation, always keep your dependencies up to date.

This is easy with automated tests and continuous integration in place. I use no version constraints, but update all dependencies at least every week. I commit if all tests pass. Sometimes I run into small 5 to 10 minute fixes, but that’s all it takes.

10. Take technical debt

When starting a new side project, you have no idea how long it will last. It could be one week, for example if you started a 20h experiment at the beginning of holidays. It could also be 20 years, if you managed to transform this side project into a full-fledged business.

Starting with all the practices that make large software systems manageable will fail. You won’t deliver fast enough. By now, you know the story, if you don’t deliver, you’ll lose your motivation.

I used TODO comments in my latest side project to keep track of the shortcuts I took. I found it had 2 main advantages in my situation :

  • I had a quick view of how much total technical debt I took
  • if things get more serious, it will be easy to find improvement points

I know that TODO comments are controversial in the software community. In the context of new side projects though, they make a lot of sense.

My advice is to take technical debt !

11. Use your day job

I’m not saying to use time from your day job to work on your side project. That would be like stealing. Your day job can help your side project in many other ways.

One I already mentioned is using your income to buy better tools.

If you have Slack Time at your day job, you could use it to start a side project that benefits your company. You’ll need to make sure that this kind of arrangement does not pose any IP issues. It can result in a win-win situation.

Another way is to find subjects at work which will grow some skills that are also useful for your side project.

12. Talk about it

Talking about your side project serves many purposes :

  • it’s an unofficial engagement to work on it
  • it provides feedback
  • it could attract early users

To summarize, the more you talk about it, the more it becomes ‘real’. You can share your side project anywhere : blog, Meetups, work, with friends or family. Depending on your topic, some places will work better than others.

Don’t be afraid that someone might steal your idea. A side project is small, not yet rocket science. It’s usually too small to be on the radar of serious businesses, and too big for individuals.

Let me explain that. Very few people have the grit to turn their ideas into something real. If you encounter someone who has the grit and the interest, ask her to join forces !

13. Find real users

Deploying your software is nice, but it’s useless until you have users. Find some ! It’s never too early to find testers. If your first demo does not embarrass you, it was too late ! At the beginning, it can be as basic as walking through an unfinished feature to get feedback.

Real user feedback always results in both high motivation and value. There are many places to get beta users : at work, through friends … have the courage to ask !

That’s again a case for building your system in baby steps. The faster you get to something you can show, the faster you can have beta users.

Do it !

If I needed a 14th best practice it would be to start today ! As with most things, just do it !

Just Do It !

Side Projects Matter

As a manager, you could benefit a lot from helping your developers with their side projects.

I finished my latest side project. That’s the fifth serious one I’ve brought to an end :

  1. 2010-2014, an improved UI for online groceries. This was both a technical project and a wannabe business
  2. Since 2011, this blog
  3. 2015, a custom magnet shop for agile team boards. This was a lean startup style business project
  4. 2016 complexity-asserts a unit test matcher to enforce algorithm complexity. This was a technical project time boxed to 20h.
  5. 2016-2017 Philou’s Planning Poker, a technical product, that I built to solve my own problem.

The more I do side projects, the more I am certain of their value to my employer.

Hand drawing with stating 'creative business idea'

Reasons companies discourage side projects

Unfortunately, most companies discourage their employees from having side projects. It boils down to fundamental fears :

  • they might get less done
  • they might leave

While these fears are legitimate, most of the time they are also unlikely or short-sighted.

Why don’t they work extra hours ?

Said another way : if developers want to code, why don’t they add new features to the company’s products ?

From my own experience, having a side project has always been an energy booster. Side projects have made me more effective at work !

For a compulsive hacker, a side project is a hobby ! Just as painting, piano or soccer are to others. Working on smaller software, being in full control, renews the joy of programming.

They’ll quit once they’ve learned new skills !

Simply said, if a company’s retention strategy is to let its developers’ skills become obsolete … It’s got problems a lot worse than a few people doing side projects at night !

They won’t be as productive !

You could think that developers will be less focused on the company’s issues while at work. Indeed, passionate side-project hackers always have their project on their minds.

Most of the time though, the extra energy provided by the side project outweighs this loss of focus.

In the end, we should trust people to be professional. Let’s deal with the problem later, when someone actually starts to underdeliver.

They’ll leave if it turns into a successful product !

Building a product company is pretty damn hard. A time-starved side project is pretty unlikely to turn into a successful business. Not much to worry about here ! If it happens, the company was lucky to have had such a productive employee.

They might steal our intellectual property !

This one is true. Only a very small minority of people might do that, but the risk remains.

You might conclude that it’s easier to play it safe and prohibit side projects … at the same time, it’s always sad to punish the majority for a minority’s bad behavior.

It boils down to a tradeoff between risks and rewards : how sensitive the company is to IP theft vs the benefits of having a side-project friendly policy.

If you are wondering what these benefits are, read on !

Side projects made me a more valuable employee

As developers, side projects teach us a lot. What is less obvious is how these new skills benefit our employers !

Keep up with technology

A side project is an occasion to work on any subject, with any technology we want. That’s the perfect time to try that latest JS framework we cannot use at work.

This will help us and our companies to transition to these new technologies in the future.

Experimenting with different platforms also widens our horizons. It teaches us new ways of addressing problems in our daily stack. For example, learning LISP macros pushed me to use lambdas to create new control structures in Java, C++ or C#.

The conclusion is that side projects make people more productive and adaptive. Which in turn makes companies more productive and resilient.

Understand what technical debt is

A technical debt iceberg

The bottleneck in a side project is always time. In this context, to deliver fast enough to keep my motivation high, I tend to take technical debt. Particularly because I don’t know how long I’ll be maintaining this code.

Even so, if I later decide to stick with this side project, this technical debt will become an issue.

That’s what technical debt is : a conscious choice to cut a corner and fix it later. Without keeping track of the cut corners, it’s not debt anymore, but crappy code ! That’s why I ended up using #TODO comments in my side projects.

Later down the road, at any moment, I can decide to invest in refactoring some technical debt out. We can apply the exact same principles at our day jobs.

Understand what a business is

Trying to make money from my side projects taught me another kind of lesson. To sell my product or service, I had to learn a ton of skills the typical developer doesn’t have. Nothing sells without marketing or sales. I also had to dip my toes in design, web content creation and project management.

Once I went through this, I was able to better understand the big picture at work. It became easier to discuss with product, project and sales people. I’m able to make better tradeoffs between engineering, product and technical debt. Non-technical colleagues appreciate that. As developers, it increases our value and trustworthiness.

Discover new ways of doing things

While progressing towards my own side project goals, I had to search the internet for help on some tasks. I ended up using SaaS tools, and discovered alternate practices to the ones I was using in my daily job.

That’s great for employers ! Developers will gain perspective about which company processes work well and which don’t. If you have some form of continuous improvement in place at work, they’ll suggest great ideas ! If you don’t, then you should start doing retrospectives now !

Companies should sponsor side projects

I hope I convinced you that side projects are at least as efficient as formal training. The topics are unknown at the beginning, but that’s the trick : they deal with the unknown unknowns !

There are many ways a company can help its employees with their side projects :

  • Slack time is a great way to spark interest in a topic. Developers might start something in their slack time, and continue it as a side project. Provided the topic has value for the company, they could continue using their slack time on it.
  • Hosting a Startup Weekend or a Hackathon. Most company offices are empty on Saturdays and Sundays. You could ask your company to lend its premises for such an event. It’s very likely that some employees will take part.
  • Even better, some companies, like Spotify, organize regular Hackathons on office hours ! That’s Slack time, on steroids !
  • Sponsoring internal communities can enable employees with side-projects to help each other. Sponsorship could be free lunch, premises or a regular small slice of time on work hours.
  • Providing a clear legal framework around side projects reduces the risks for everyone. Questions like the ownership of intellectual property are better dealt with upfront.

A photo of Spotify's open space during a Hackathon

If you are a developer looking for a side project idea, suggest slack time in a retrospective ! You could also ask for sponsorship and organize a startup weekend or a lunchtime community.

Finally, if your company is side-project friendly, communicate about it ! It’s a great selling point and it will attract great programmers.

Speed Up the TDD Feedback Loop With Better Assertion Messages

There is a rather widespread TDD practice of having a single assertion per test. The goal is a faster feedback loop while coding : when a test fails, it can only fail for a single reason, making the diagnostic faster.

The same goes for test names. When a test fails, a readable test name in the report simplifies the diagnostic. Some testing frameworks allow the use of plain strings as test names. In others, people use underscores instead of CamelCase in test names.

RubyMine test report

A 4th step in TDD: Fail, Fail better, Pass, Refactor

First, make it fail

Everyone knows that Test Driven Development starts by making the test fail. Let me illustrate why.

A few years ago, I was working on a C# project. We were using TDD and NUnit. At some point, while working on a story, I forgot to make my latest test fail. I wrote some code to pass this test, I ran the tests, and they were green. When I was almost done, I tried to plug all the parts together, but nothing was working. I had to start the debugger to understand what was going wrong. At first, I could not understand why the exact thing I had unit tested earlier was now broken. After more investigation I discovered that I had forgotten to make my test public. NUnit only runs public tests …

If I had made sure my test was failing, I would have spotted straightaway that it was not run.

Then make it fail … better !

I have lived through the same kind of story with wrong failures many times. The test fails, but for a bad reason. I move on to implement the code to fix it … but it still does not pass ! Only then do I check the error message and discover the real thing to fix. Again, it’s a transgression against baby steps and the YAGNI principle. If the test is small, that might not be too much of an issue. But it can be if the test is big, or if the real fix deprecates all the premature work.

Strive for explicit error messages

The idea is to make sure to have good enough error messages before moving on to the “pass” step.

Cover of GOOSGT

There’s nothing groundbreaking about this practice. It’s not a step as explicit as the other 3 steps of TDD. The first place I read about this idea was in Growing Object Oriented Software Guided By Tests.

How to improve your messages

Readable code

Some test frameworks print out the failed assertion code to the test failure report. Others, especially in dynamic languages, use the assertion code itself to deduce an error message. If your test code is readable enough, your error messages might be as well !

For example, with Ruby RSpec testing framework :

it "must have an ending" do
  expect( @daltons)).to be_valid
end

Yields the following error :

expected #<Vote ...> to be valid, but got errors: Ending can't be blank

Pass in a message argument

Sometimes, readable code is not enough to provide good messages. All testing frameworks I know provide some way to pass in a custom error message. That’s often a cheap and straightforward way to clarify your test reports.

  it "should not render anything" do
    expect(response.code).to eq(HTTP::Status::OK.to_s),
                             "expected the post to succeed, but got http status #{response.code}"
  end


expected the post to succeed, but got http status 204

Define your own matchers

The drawback with explicit error messages is that they harm code readability. If this becomes too much of an issue, one last solution is to use test matchers. A test matcher is a class encapsulating assertion code. The test framework provides a fluent API to bind a matcher to the actual and expected values. Almost all test frameworks support some flavor of these. If not, or if you want more, there are libraries that do :

  • AssertJ is a fluent assertion library for Java. You can easily extend it with your own assertions (i.e. matchers)
  • NFluent is the same thing for .Net.

As an example, in a past side project, I defined an include_all rspec matcher that verifies that many elements are present in a collection. It can be used that way :

expect(items).to include_all(["Tomatoes", "Bananas", "Potatoes"])

It yields error messages like

["Bananas", "Potatoes"] are missing

A custom matcher is more work, but it provides both readable code and clean error messages.
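To show the same mechanics outside of rspec, here is a stdlib-only Java sketch of an include_all-style assertion helper. All the names are mine, not from a real matcher library : the point is that the assertion reports only the missing elements, just like the matcher above.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// A hand-rolled "matcher" : checks that a collection contains all the expected
// elements, and reports only the missing ones in its error message.
public class IncludeAllMatcher {
    static List<String> missingFrom(Collection<String> actual, Collection<String> expected) {
        List<String> missing = new ArrayList<>(expected);
        missing.removeAll(actual);
        return missing;
    }

    static void assertIncludesAll(Collection<String> actual, Collection<String> expected) {
        List<String> missing = missingFrom(actual, expected);
        if (!missing.isEmpty()) {
            throw new AssertionError(missing + " are missing");
        }
    }

    public static void main(String[] args) {
        try {
            assertIncludesAll(List.of("Tomatoes"), List.of("Tomatoes", "Bananas", "Potatoes"));
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // [Bananas, Potatoes] are missing
        }
    }
}
```

Wrapped in an AssertJ custom assertion, the same logic would give you the fluent call-site syntax on top of the focused error message.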

Other good points of matchers

Like any of these 3 tactics, matchers provide better error messages. Explicit error messages, in turn, speed up the diagnostic on regression. In the end, faster diagnostic means easier maintenance.

But there’s more awesomeness in custom test matchers !

Adaptive error messages

In a custom matcher, you have to write code to generate the error message. This means we can add logic there ! It’s an opportunity to build more detailed error messages.

This can be particularly useful when testing recursive (tree-like) structures. A few years ago, I wrote an rspec matcher library called xpath-specs. It checks html views for the presence of recursive XPath. Instead of printing

Could not find //table[@id="grades"]//span[text()='Joe'] in ...

It will print

Could find //table[@id="grades"] but not //table[@id="grades"]//span[text()='Joe'] in ...

(BTW, I’m still wondering if testing views this way is a good idea …)

Test code reuse

One of the purposes of custom test matchers is to be reusable. They are a good place to factor out assertion code. It is both more readable and more organized than extracting an assertion method.

Better coverage

I noticed that custom matchers have a psychological effect on test coverage ! A matcher is a place to share assertion code. Adding thorough assertions there feels legitimate, contrary to repeating them inline.

Avoids mocking

We often resort to mocks instead of side effect tests because they are a lot shorter to write. A custom matcher encapsulates the assertion code. It makes it OK to use a few assertions to test for side effects, which is usually preferable to mocking.

For example, here is a matcher that checks that our remote service API received the correct calls, without doing any mocking.

RSpec::Matchers.define :have_received_order do |cart, credentials|
  match do |api|
    not api.nil? and
    api.login == credentials.login and
    api.password == credentials.password and
    cart.lines.all? do |cart_line|
      api.received_line?(cart_line) # hypothetical helper checking the line reached the api
    end
  end

  failure_message do |api|
    "expected #{api.inspect} to have received order #{cart.inspect} from #{credentials}"
  end
end
Care about error messages

Providing good error messages is a small effort compared to unit testing in general. At the same time, it speeds up the feedback loop, both while coding and during later maintenance. Imagine how much easier it would be to analyze and fix regressions if they all had clear error messages !

Spread the word ! Leave comments in code reviews, demo the practice to your pair buddy. Prepare a team coding dojo about custom assertion matchers. Discuss the issue in a retro !

'Just Do It' written on a board

20 Bad Excuses for Not Writing Unit Tests

I guess we always find excuses to keep on with our bad habits, don’t we ? Stephen King

  1. I don’t have the time. But you’ll have the time to fix the bugs …
  2. I don’t know how to write tests. No problem, anyone can learn.
  3. I’m sure the code is working now. The competent programmer is fully aware of the limited size of his own skull …
  4. This code is not testable. Learn or refactor.
  5. It’s (UI|DB) code, we don’t test it. Because it never crashes ?
  6. Because I need to refactor first … and I need tests to refactor ! Damn, you’ve fallen into the test deadlock !
  7. It’s multithreaded code, it’s impossible to test. Because it’s fully deterministic ?
  8. The QA department is already testing the code. Is that working well ?
  9. I should not test my own code, I’ll be biased. Start testing other people’s code right now then !
  10. I’m a programmer, not a tester. Professional programmers write tests.

A quote 'Be Stronger Than Your Excuses'

  1. I’m using a REPL, it replaces unit tests. Sure, and are you running your REPL buffers on the CI ? And keeping them for the next time someone modifies your code ?
  2. My type system is strong enough to replace tests. Does it detect when you use ‘+’ instead of ‘*’ ?
  3. We don’t have the tooling to write unit tests. Get one.
  4. Tests aren’t run automatically anyway. Install a Continuous Integration Server.
  5. I’m a domain expert developer, writing tests is not my job. Creating bugs isn’t either !
  6. We’d rather switch to the Blub language first ! You’re right, let’s do neither then !
  7. We don’t test legacy code. Specifically because it is legacy code.
  8. Adding tests for every production code we write is insane ! As shipping untested code is unprofessional.
  9. I find more issues doing manual testing. Exploratory Testing is a valuable practice, even more so on top of automated tests.
  10. Because my teammates don’t run them. Time for a retrospective.

'Just Do It' written on a board