Flavors of TDD

Over the years of doing coding dojos with the same circle of people, I came up with my own style of practicing TDD. Lately, I had the chance to do a pair programming session with someone I did not know, and it made me realize that there are in fact even more ways to practice TDD than I thought.

Mockist vs Classicist

A lot has already been written (and discussed) about these two approaches. I have already blogged about the subject, and I even gave a talk about it. From my own point of view, I believe that the drawbacks of making mocking the default far outweigh the benefits. I’m not saying that mocks aren’t useful from time to time, but rather that they should remain the exception.

Top-Down vs Bottom-Up

That’s the reason why I wrote this post. This is the main difference I found between my style and my pair’s. Let me explain.

Top-Down

Doing TDD top-down means starting with high level end to end tests, implementing the minimum to make them pass, refactoring and repeating. A bit like BDD, the point is to focus on the expected behavior and avoid writing useless things. The downside is that the refactoring part can get pretty difficult. On real life code, strictly following top-down would mean writing a feature test first, passing it with a quick and dirty implementation, and then spending hours trying to refactor all that mess … good luck !

Here is another example, from coding dojos this time. Having had success with the top-down approach during previous dojos, we once intentionally tried to code Conway’s Game of Life top-down. We did so by writing high level tests checking for special patterns (gliders …). That was a nightmare ! It felt like trying to reverse engineer the rules of the game from real use cases. It did not get us anywhere.

Bottom-Up

At the other end of the spectrum, you can do bottom-up TDD. This means unit testing and implementing all the small bricks you think you’ll need to provide the expected overall feature. The idea is to avoid tunnels and get fast feedback on what you are coding. The downside is that you might be coding something that will end up being unnecessary. Be careful : if you find yourself spending a lot of time building up utility classes, you might be doing too much bottom-up implementation.

The Numerals to Romans kata is a good exercise to experience both how bottom-up can fail and how it can succeed. Every time I ran this exercise during a coding dojo, people new to it would come up with complicated solutions (often involving complex array manipulation). In contrast, applying disciplined bottom-up TDD brings a brutally effective solution to Numerals to Romans.
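
To make this concrete, here is a minimal Ruby sketch (my own illustration, not code from a dojo) of the kind of solution this disciplined, brick-by-brick approach converges on :

# The only 'brick' really needed : a value -> digit conversion table,
# ordered from largest to smallest.
ROMAN_DIGITS = {
  1000 => 'M', 900 => 'CM', 500 => 'D', 400 => 'CD',
  100 => 'C', 90 => 'XC', 50 => 'L', 40 => 'XL',
  10 => 'X', 9 => 'IX', 5 => 'V', 4 => 'IV', 1 => 'I'
}.freeze

def to_roman(number)
  ROMAN_DIGITS.reduce('') do |roman, (value, digit)|
    # Append the digit as many times as its value fits,
    # then carry on with the remainder.
    count, number = number.divmod(value)
    roman + digit * count
  end
end

to_roman(2016) # => "MMXVI"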

Mixed approach

Both approaches have their pros and cons. I really believe developers who are serious about TDD should master both, and learn when to apply each. In fact, as is often the case, the best approach lies somewhere in the middle. Here’s my recipe :

  1. Start with a high level feature test
  2. Try to make it pass …
  3. … (usually) fail
  4. Rollback or shelve your test and draft implementation (see the sketch below)
  5. Build a brick
  6. Unshelve
  7. Try to make it pass …
  8. … and so on, until the high level test finally passes
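
With git, the shelving steps might look like this (a sketch, assuming the work in progress is not committed yet) :

git stash        # step 4 : shelve the failing test and draft implementation
# ... write, test and commit the missing brick (step 5) ...
git stash pop    # step 6 : unshelve, then try again to make the test pass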

In fact, it’s a lot like the Mikado Method, applied to building features instead of refactoring.

Practice in dojos

It’s possible to intentionally practice this in coding dojos as well. Most katas should be OK, as long as the group agrees up front to solve them using this particular approach.

If, during a dojo, you’ve just written a test and suddenly realize that it won’t be easy to get it passing because the elements you need are spread out in your code, this is the time ! Comment out the test, get the green bar, refactor, uncomment the test, try to make it pass, repeat … Eventually, you’ll have all the bricks needed to make your test easy to pass.
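
In RSpec, for example, skipping is even cleaner than commenting out. Here is a sketch (next_generation, GLIDER and translate are hypothetical names from a Game of Life kata, not from any library) :

it 'moves a glider one cell diagonally every four generations' do
  # Park the too-ambitious test until the missing brick exists
  skip 'first brick needed : counting live neighbors'
  state = GLIDER
  4.times { state = next_generation(state) }
  expect(state).to eq(translate(GLIDER, 1, 1))
end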

Some might say this is not ‘pure’ TDD, but that sounds like cargo cult to me ! As long as you make sure you are not building useless stuff, and that you keep the feedback loop as short as possible, you’re on the right track.

How to Use Hackernews and Reddit for Blogging

A few weeks ago, I posted my latest article, Is There Any Room for the Not-Passionate Developer ?, on Hackernews and Reddit Programming. The post stayed on the first page for a while, and I got a lot of traffic. If you blog yourself, you might be interested in how it happened, and what I learned in the process.

How it started

In Soft Skills, the software developer’s life manual, John Sonmez explains that posting your blog articles on HN or Reddit might bring you a ton of traffic, but that the comments can be hard to swallow at times. Within a few hours of publishing, my blog post had generated some positive activity on Twitter (favorites and retweets) from my regular followers. That’s a good sign that a post is good enough, so, as I had promised myself I would in such a case, I submitted the post to both HN and Reddit.

What happened ?

I don’t know for sure about Reddit, but I know my post stayed on the first page of HN for a few hours; it even went up to third place for a while. In the process I got a lot of traffic, a lot more than I am used to. I also got a ton of comments, on HN, on Reddit and directly on my post. John Sonmez had warned that comments on HN and Reddit can be very harsh, so I went through them quickly, took notes about the points that seemed interesting, but only responded to comments on my website.

Overall, the comments were pretty interesting though, and brought up a lot of valid points. I’m planning to write a ‘response’ article to put all these into perspective.

Most of the traffic came on the day I submitted my post, but I got more traffic than usual for 2 or 3 days. Since then, the traffic has settled down, but I now get between 2 and 5 times more daily traffic than I typically had before ! An online Taiwanese tech magazine even asked me for permission to translate the post into Chinese !

I’m not sure how my website performed during the traffic spike. I’m using Octopress to statically generate HTML on GitHub Pages, so that should be fine. I am also using a custom domain though, so I need to make sure my DNS is correctly configured for this to perform well.

Advice for bloggers

So here is what I am going to do regarding HN and Reddit in the future :

  1. They can bring so much traffic and so many backlinks that I’ll definitely continue to submit blog posts from time to time
  2. For the moment, I’ll stick to submitting only articles that have already received good feedback; I don’t want to earn bad karma or a bad reputation on these websites
  3. I might submit old articles that gathered good reviews at the time I wrote them
  4. Concerning comments, I’ll try to grow an even thicker skin. Maybe at some point I’ll start answering on HN or Reddit

Of course, depending on how this works out, I will adapt !

Kudo Boxes for Kids

How do you get your kids to participate in housekeeping ? I guess that’s the dream of all parents. As such, we’ve tried quite a lot of tactics throughout the years. Carrot and stick never really worked, so we tried positive reinforcement, gratitude … Unfortunately, nothing made any noticeable improvement.

Until now !

At work, we’ve been using kudo boxes for a while now. A kudo box is a small mailbox where teammates can drop a word of thanks or some praise (no blame allowed here !).

Why not try the same thing at home ? During the summer holidays, we built kudo boxes for everyone in the family.

It’s a nice and easy way to express gratitude for any good thing our kids do. The great thing is that it’s cheap, and it’s easy to carry cards around and hand one out at any moment.

What happened ?

First, we now have very joyful kudo reading sessions : our kids rush to the boxes to check for new cards. The second most noticeable change is that they are both participating more in the house chores ! For example, as soon as we start cooking, they might spontaneously set the table. Or they might bring us tools, helping as best as they can, when we are tending the garden.

To summarize, it seems it brought a lot of joy and love into the house.

How we started

There are many ways to build a kudo box. The simplest might be to take an old shoe box and cut a hole in the lid. We bought a wooden box with four drawers, and spent some time all together decorating it. This in itself was already fun.

We started out using simple pieces of paper as kudo cards, but I later downloaded and printed a bunch of official kudo cards from the Management 3.0 website. It turns out there is even a version in French.

Bonus

An unexpected, but great, side effect is that my spouse and I started to get kudos as well ! It’s really nice to receive a word from your kids. For example, here is a drawing I got from my daughter.

RSpecProxies Now Supports .to receive(xxx)… Syntax

Pure mocks are dangerous. They let defects go through, give a false sense of security and are difficult to maintain.

I’ve already talked about this before, but since then, DHH announced that he was quitting TDD, the ‘Is TDD Dead ?’ debate took place, and the conclusion is that mockists are dead.

There are still times when mocks feel much simpler than anything else, though. For example, imagine your process leaks and crashes after 10 hours, and the fix is to pass an option to a third party library : how would you verify this with a fast test ? That’s exactly the kind of situation where test proxies save you from mocks. A test proxy defers everything to the real object, but also features unintrusive hooks and probes that you can use in your test. If you want a code example, check this commit, where I refactored a Rails controller test from mocks to RSpecProxies (v0.1).

I created RSpecProxies a while ago, but its syntax made it alien to the RSpec world; it needed an update. RSpec now supports basic proxying with partial stubs, spies, and the and_call_original and and_wrap_original methods. RSpecProxies 1.0 is a collection of hooks built on top of these to make proxying easier, with a syntax that will be familiar to RSpec users.
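
As a reference point, here is what proxying by hand looks like with plain RSpec’s and_wrap_original (a sketch; User.load is just an example target). The RSpecProxies hooks below are shortcuts over this kind of code :

# Plain RSpec : call through to the real method, with custom code around it
allow(User).to receive(:load).and_wrap_original do |original, *args|
  result = original.call(*args)  # defer to the real implementation
  puts "User.load returned #{result.inspect}"  # any probe code you need
  result
end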

Before original hook

This hook is triggered before a call to a method. Suppose you want to simulate a bad connection :

it 'can simulate unreliable connection' do
  i = 0
  allow(Resource).to receive(:get).and_before_calling_original { |*args|
    i += 1
    raise RuntimeError.new if i % 3 == 0
  }

  resources = Resource.get_at_least(10)

  expect(resources.size).to eq(10)
end

After original hooks

RSpecProxies provides the same kind of hook after the call :

it 'can check that the correct data is used (using and_after_calling_original)' do
  user = nil
  allow(User).to receive(:load).and_after_calling_original { |result| user = result }

  controller.login('joe', 'secret')

  expect(response).to include(user.created_at.to_s)
end

Here we are capturing the return value to use later in the test. For this particular purpose, RSpecProxies also provides 2 other helpers :

# Store the latest result in @user of self
allow(User).to receive(:load).and_capture_result_into(self, :user)

# Collect all results in the users array
users = []
allow(User).to receive(:load).and_collect_results_into(users)

Proxy chains

RSpec mocks provides the message_chain feature to build chains of stubs. RSpecProxies provides a very similar proxy chain concept. The main difference is that it creates proxies along the way, not pure stubs. Pure stubs assume that you are mocking everything; as our goal is to mock as little as possible, using proxies makes more sense.

When using a mockist approach, a message chain is a bad smell, because it makes your tests very brittle by depending on a lot of implementation details. In contrast, proxy chains are meant to be used where they are the simplest way to inject what you need, without creating havoc.

For example, suppose you want to display the progress of a very slow background task. You could mock a lot of your objects to get a fast test, or, if you wanted to avoid all the bad side effects of mocking, you could run the background task in your test and have a slow test … Or, you could use a chain of proxies :

it 'can override a deep getter' do
  allow(RenderingTask).to proxy_message_chain("load.completion_ratio") { |e| e.and_return(0.2523) }

  controller.show

  expect(response).to include('25%')
end

Here the simplest thing to do is just to override a small getter, because from a functional point of view, that’s exactly what we want to test.

Last word

The code is on GitHub, v1.0.0 is on rubygems, it requires Ruby v2.2.5 and RSpec v3.5, the license is MIT, and help in any form is welcome !

How to Prepare a New Ruby Env in 3 Minutes Using Docker

One or two weeks ago, I registered for the Paris Ruby Workshop Meetup and needed a Ruby env. I have been using Vagrant quite a lot to isolate my different dev envs from each other and from my main machine, but as I’ve been digging more into Docker lately, I thought I’d simply use Docker and Docker Compose instead.

It turned out to be dead simple. All that is needed is a docker-compose.yml file to define the container, declare the shared volume and set a bundle path inside it :

rubybox:
  image: ruby:2.3
  command: bash
  working_dir: /usr/src/app
  environment:
    BUNDLE_PATH: 'vendor/bundle'
  volumes:
    - '.:/usr/src/app'

Without the custom bundle path, bundled gems would be installed elsewhere in the container, and lost at every restart.

To use the Rubybox, just type docker-compose run rubybox and you’ll get a shell inside your Ruby machine, where you can do everything you want.

In fact, I found the thing so useful that I created the Rubybox git repo to simplify cloning and reusing it. I’ve already cloned it at least 3 times since then !

git clone git@github.com:philou/rubybox.git
cd rubybox
docker-compose run rubybox

How to Grow a Culture Book

Have you read Valve’s Handbook for New Employees ?

In Management 3.0 terms, that’s a culture book. It’s a great way to build and crystallize a culture; it serves as a guide for newcomers, and can later double as a hiring ad for your team or company.

The good thing about a culture book is that you don’t have to write it in one go. It’s a living artifact anyway, so you’d better not ! Our current culture book emerged from a collection of pages in our wiki.

It started as working agreements

The first real contribution to our culture book (though we did not know it at the time) was spending some time in retrospectives to define and review our working and coding conventions.

When we started doing retrospectives, we had to discuss, agree on and formalize the decisions we made about our way of working. We usually did a ‘review how we work’ activity at the beginning of retros, spending 10 minutes to make sure we all understood and agreed on our current working conventions. If any disagreement or update was required, we would discuss it during the retro and, at the end, add, remove or modify items on our agreements page.

It continued as self-organization workshops

After a while, we had built up a pretty extensive set of working and coding conventions. The team had already become quite productive, but to keep the momentum in the long run, we needed to increase self-organization. In the Management 3.0 books, and in Management Workout (which has been re-edited as Managing for Happiness) in particular, I found descriptions of how to use a delegation board and delegation poker to measure and formalize the current delegation levels of a team.

We did this, and went on to run a lot of self-organization workshops.

After each of these workshops, we created a wiki page, explaining how we planned to handle the subject in the team.

The book

At that point, we had fairly extensive and formal descriptions of our working practices and conventions. By reading this set of pages, someone would get a pretty accurate grasp of our principles and values.

Wondering how we could write our own culture book, I had an “Aha !” moment and realized that all I had to do was to create a wiki page pointing to all our different agreement pages. This only took 5 minutes.

At the moment, our culture book serves 3 purposes :

  • documentation for the team members
  • guide for newcomers
  • description of how we work, for people in the company who might want to move to our team

The next step would be to add a dash of design and a few war stories, export it as a PDF, and use it outside to advertise the team and the company.

When the Boy Scout Rule Fails

Here goes the boy scout rule :

Always check a module in cleaner than when you checked it out.

Unfortunately, this alone does not guarantee that technical debt stays under control. What can we do then ?

Why the boy scout rule is not enough

I can easily think of a few issues that are not covered by the boy scout rule.

It only deals with local problems

In its statement, the boy scout rule is local; it does not address large scale design or architecture issues. Applying it keeps files well written, with clear and understandable code. From a larger perspective, though, it improves the overall design very little, and very slowly.

Large scale refactorings are very difficult to deal with using the boy scout rule alone. It could be done, but it would require sharing the refactoring goal with the whole team, and then tracking its progress, while at the same time dealing with all the other subjects of the project. That’s starting to sound like multitasking to me.

It’s skill dependent

Another point about the boy scout rule (and to be fair, about any refactoring technique) is that programmers will only be able to clean the code as much as their skills allow !

Imagine what would happen if a master developer arrived in a team of juniors : he’d spot a lot of technical debt and would suggest improvements and ways to clean the code. Code that was thought of as very clean would suddenly be downgraded to junk !

The point here is that the boy scout rule cannot guarantee that you have no technical debt, because you don’t know how much you have !

That’s where the debt metaphor reaches its limits and flips into an investment one : by investing time in a newly discovered refactoring, you could get a productivity boost !

The cover of "Domain Driven Design"

In Domain-Driven Design: Tackling Complexity in the Heart of Software, Eric Evans calls this knowledge distillation. He means that little by little, the team gains a better understanding of the domain, sometimes going through what he calls a ‘breakthrough’. These breakthroughs often turn existing code into technical debt …

It’s context dependent

Developers are not the only ones responsible for creating technical debt. Changes to the environment create it too.

For example, if market conditions change and new expectations for the product slowly become the norm, your old, perfectly working system turns into legacy and technical debt. Let’s look at what happened to the capital markets software industry in response to the 2008 crisis :

  • The sector became a lot more regulated
  • Risk control moved from nightly batches to real time
  • The demand for complex (and risky) contracts decreased
  • As a consequence, trading on simpler contracts exploded

All these elements combined invalidated existing architectures !

New technologies also create technical debt. Think of the switch from mainframes to the web.

What do we need then ?

Should we stop using the boy scout rule ? Surely not, that would be total nonsense. Submitting clean and readable code is a must.

But it is not enough. If we have spotted a large scale refactoring that could bring improvement, we should do what a fund manager would do :

  1. Estimate the return on investment (see the example below)
  2. If it is good enough, do it now
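
For example, with made-up numbers : a refactoring that costs 10 developer-days and removes friction worth half a day every week pays for itself in 10 / 0.5 = 20 weeks. That’s a good investment if the code will live for years, and a bad one if it is scheduled for retirement next quarter.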

Obviously, large refactorings should also be split into smaller, value-adding, cost-reducing items. But then what ?

The cover of "The Nature of Software Development"

In The Nature of Software Development, Ron Jeffries says that we need a single value-based prioritization strategy for everything, including technical improvements. Once you’ve got that, there’s no point in splitting your refactoring and embedding it in other tasks : that would just increase your work in progress, reducing your throughput and cycle time.

Frankly, I think that’s easier said than done. I can think of two ways :

  • As Ron Jeffries tends to say, have a jelled, cross-functional team discuss and prioritize collectively
  • As Don Reinertsen advocates, use an economic framework to estimate the return on investment

At least that’s a starting point !

Is There Any Room for the Not-Passionate Developer ?

In Rework, the Basecamp guys David Heinemeier Hansson and Jason Fried advise us to “Fire the workaholics”, while in Zero to One, Peter Thiel argues that great working conditions (as described at Google, for example) result from 10x technological advantages, not the other way round.

Back in 1983, Bill Gates said :

You have to think it’s a fun industry. You’ve got to go home at night and open your mail and find computer magazines or else you’re not going to be on the same wavelength as the people [at Microsoft].

Where do we stand now ? Do you need to live and breathe programming to remain a good developer ?

What about the 40h per week rule ?

Studies have repeatedly demonstrated that 40 hours per week is the most productive workload, but in Outliers, the Story of Success, Malcolm Gladwell explains that quickly getting to the 10,000 hours of practice is a required step on the road to success. As my Aïkido teacher says, the more you practice, the better you get …

In Soft Skills: The software developer’s life manual, John Sonmez also makes the case for hard work : while he long believed that smart work would be enough, it’s only when he put in more that he managed to drastically change his career.

In a debate, DHH argued in favor of work-life balance, whereas Jason Calacanis said that working in a startup had to be an all-in activity. In the end, they agreed that what matters is passion.

From my own experience, whenever I work on something I am passionate about :

  • I am more productive
  • I feel energized rather than dulled by the work

When I look around me, all the great developers I know are passionate, and they put more than 40 hours per week into programming. I’ve also noticed that passion and effort have always been pretty good predictors of future skill.

But then, how do passionate people manage to remain productive when working more than 40 hours per week ?

What about the under-the-shower idea ?

In Pragmatic Thinking and Learning: Refactor Your Wetware (which is a great book, BTW), Andy Hunt explains that our R-mode works in the background and needs time away from the task at hand to come up with “out of the box” creative solutions.

XP argues for a sustainable pace, but at the same time, Uncle Bob says that we should put in 60 hours of work per week (40 for our employer, 20 for ourselves) to become and remain ‘professionals’ (I guess that’s from The Clean Coder, if I remember correctly).

On my side, 6 to 8 solid hours of pair-programming on the same subject is the most I can do before becoming a Net Negative Producing Programmer. I can do more programming per day if I also work on a side project, though !

I guess that’s how passionate people manage. They have different topics outside of their main work :

  • they read books about programming
  • they have their own side projects
  • they read articles about programming
  • they might maintain a programming blog
  • they might attend, organize or speak at meetups

Most of the time, this does not amount to more work, but rather to more learning. While I’ve noticed that all the great programmers around me are passionate and strive to improve at their craft, I’ve also noticed that overworked workaholics usually aren’t very productive.

Special challenges for mums and dads

I think that Bill Gates’ 1983 statement still holds : if you are not passionate about programming, you’ll have a hard time remaining a programmer and succeeding in the long run.

The great thing about all this passion is that we get to experience an energized work environment, always bubbling with change and novelty. On the flip side, keeping up with it all is not always easy.

As we developers gain experience, we tend to lose patience with everything that feels like a pain in the ass, and we want :

  • Powerful languages and technologies
  • An efficient working environment
  • Smart colleagues

Unfortunately, that might also be the moment in your life when you become a parent, and you’ll want a stable income to sustain your family and some time to spend with your kids.

That is when things get tricky. You can neither jump ship for the next cool but risky startup where you’d do great things, nor find enough time moonlighting to improve your skills … To add insult to injury, even with 10 years of experience in various languages and technologies, most companies won’t look at your resume unless it contains the right keywords … It looks like the developer’s version of The Innovator’s Dilemma !

Lack of passion and parenthood might partially explain why people stop being developers after a while. I can quickly think of 2 bad consequences of this :

  • We tend to reinvent the wheel quite a lot (I’m looking at you, .js frameworks …)
  • We might be meta-ignoring (ignoring that we ignore) people skills that could make us all more efficient


How to Setup Rails, Docker, PostgreSQL (and Heroku) for Local Development ?

My current side project is an online tool for remote planning pokers. I followed my previous tutorial to set up Rails, Docker and Heroku.

Naturally, as a BDD proponent, I tried to install Cucumber to write my first scenario.

Here is the result of my first cucumber run :

$ docker-compose run shell bundle exec cucumber
rails aborted!
PG::ConnectionBad: could not translate host name "postgres://postgres:@herokuPostgresql:5432/postgres" to address: Name or service not known
...

It turned out that I had taken instructions from a blog article on Codeship that mistakenly used host: instead of url: in its config/database.yml.
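
To make the mix-up concrete, here is a reconstruction based on the error message above (not the exact snippet from that article). host: expects a machine name, so handing it a whole connection string cannot work :

# Broken : the full connection URL ends up being used as a host name
development:
  host: postgres://postgres:@herokuPostgresql:5432/postgres

# Fixed : url: lets Rails parse the connection string itself
development:
  url: postgres://postgres:@herokuPostgresql:5432/postgres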

After fixing that in my database.yml file, things were only working slightly better :

$ docker-compose run shell bundle exec cucumber
rails aborted!
ActiveRecord::StatementInvalid: PG::ObjectInUse: ERROR:  cannot drop the currently open database
: DROP DATABASE IF EXISTS "postgres"

The thing is, the config was still using the same database for all environments, and that’s not exactly what I wanted. I updated my config/database.yml :

default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  username: postgres
  port: 5432
  host: herokuPostgresql

development:
  <<: *default
  database: planning_poker_development

test: &test
  <<: *default
  database: planning_poker_test

production:
  <<: *default
  url: <%= ENV['DATABASE_URL'] %>

Victory ! Cucumber is running :

$ docker-compose run shell bundle exec cucumber
Using the default profile...
0 scenarios
0 steps
0m0.000s
Run options: --seed 45959

# Running:



Finished in 0.002395s, 0.0000 runs/s, 0.0000 assertions/s.

0 runs, 0 assertions, 0 failures, 0 errors, 0 skips

Fixing rake db:create

Searching the web, I found that people were having similar issues with rake db:create. I tried to run it, and here is what I got :

$ docker-compose run shell bundle exec rake db:create
Database 'postgres' already exists
Database 'planning_poker_test' already exists

Why is it trying to create the postgres database ? It turns out that DATABASE_URL takes precedence over what is defined in config/database.yml, so I needed to unset this variable locally. I already had the docker-compose.override.yml for that :

web:
  environment:
    DATABASE_URL:
  ...

shell:
  environment:
    DATABASE_URL:
  ...

Rake db:create works just fine now :

$ docker-compose run shell bundle exec rake db:create
Database 'planning_poker_development' already exists
Database 'planning_poker_test' already exists

Starting a psql session

Throughout my troubleshooting, I tried to connect to the PostgreSQL server to make sure that the databases were created and ready. Here is how I managed to do that :

1. Installing the psql client

On my Ubuntu machine, that was a simple sudo apt-get install postgresql-client-9.4.

2. Finding the server port

The port can be found through config/database.yml or through docker ps. Let’s use the latter, as we’ll need it to find the server IP as well.

$ docker ps
CONTAINER ID        IMAGE            COMMAND                  CREATED             STATUS              PORTS           NAMES
b58ce42d2b2b        postgres         "/docker-entrypoint.s"   46 hours ago        Up 46 hours         5432/tcp        planningpoker_herokuPostgresql_1

Here the port is clearly 5432.

3. Finding the server IP

Using the container id we got from the previous docker ps command, we can use docker inspect to get further details :

$ docker inspect b58ce42d2b2b | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",

4. Connecting to the database

Connecting is now just a matter of filling in the command line :

$ psql -U postgres -p 5432 -d planning_poker_development -h 172.17.0.2
planning_poker_development=# select * from schema_migrations;
 version
---------
(0 rows)

5. Installing psql client directly in the shell

It should be possible to install the psql client in the shell container automatically, but I must admit I have not tried this yet. It should just be a matter of adding this to the Dockerfile :

RUN apt-get update && apt-get install -y postgresql-client-<version>