The Holy Code Antipattern

As I’ve encountered this situation in different disguises at different companies, I now assume it’s a widespread antipattern.

Context

A team of programmers inherits a piece of code from one of their bosses. They find it really hard to maintain: it is difficult to understand, fix, and change.

The Antipattern

Since this piece of code seems too complex to be maintained by a team of mere programmers, the boss simply forbids them:

  • to refactor any part of it
  • to rewrite it from scratch
  • to use something else instead

Consequences

  • This often limits the number of bugs that appear in this library, but …
  • It slows down development, because of the micromanagement required to enforce this pattern
  • It frustrates programmers, and the best ones are likely to leave
  • It prevents better design
  • Even worse, in the long run, it prevents a great domain-driven design from emerging through merciless refactoring
  • In the end, it makes the whole organization perform worse

Examples

  • Your boss wrote something a few years ago. The domain being more or less complex, the resulting code is complicated, and the subject eventually got a reputation for being ‘touchy’. Your boss is the only person who effectively manages to change anything in there. He’s a bit afraid that trying to improve it might make the whole thing break down and become a bug nest. So, now that he has some authority, he forbids anyone to touch it. If a change is finally required, he’ll micromanage it!

  • Your big boss spent some overtime writing an uber-meta-generic-engine to solve the universe and everything. After seeing many developers fix the same kinds of bugs over and over, he decided it was time to dust off his compiler and start building something that would address the root cause of them all. In the spirit of the second-system effect, he added all the bells and whistles to his beloved project, trying to incorporate a solution to every issue he had seen during the last decade. This code grew and grew in total isolation from any real working software. When he eventually thought it was ready, he just dropped the whole thing on your team, which is now responsible for integrating and using it in the running system. He micromanages the whole thing, and you don’t have any choice but to comply and succeed. This usually generates gazillions of bugs, makes projects really late and ruins the developers’ lives.

Alternatives

  • Use collective code ownership so that knowledge about the code is shared by design
  • Trust programmers to design and architect the system
  • Use constant refactoring to let tailor-made, domain-driven designs emerge from the system

RIP mes-courses.fr

Rest in peace, mes-courses.fr. Here is what it looked like:

I wanted to create a really fast online grocery front-end, where people could do their shopping for the week in only 5 minutes. It supported shopping for recipes instead of individual items, and I also envisioned automatic menu recommendations and automatic item-preference selection. I started 4 years ago, and this is my last word on the subject :). If you’re thinking about starting your own side project, this post is for you.

Here are the lessons I learned

  • As a professional programmer, I largely underestimated the non-programming time required for a serious side project. It represents more than half the time you’ll spend on your project (marketing, discussing with people, mock-ups and prototypes)
  • When I started, I kind of estimated the time it would take me to build a first prototype. Again, I ridiculously underestimated this:
    • because of the previous point
    • because on a side project, you’ll be on your own to handle any infra issue
    • because you don’t have 8 hours per day to spend on your project (as a professional developer and dad of 2, I only managed to get 10 to 15 hours per week)
  • A small project does not require as much infrastructure as a big one. I lost some time doing things as I do on projects with more than 100K lines of code. So next time:
    • I’ll stick to end-to-end Cucumber tests for as long as possible
    • I’ll use an economic framework like the one described in Donald G. Reinertsen’s Flow book to prioritize improvements vs features
  • Eventually, what killed me was that I could not go around the “experiment –> adapt” loop fast enough. The project was just too big for my time
    • I’ll try to select a project that suits my constraints of time and money
    • This will be one of the first hypotheses that I’m willing to verify
    • Web marketing and HTML design are more important than coding skills to run experiments: I’m learning both
  • Scraping is a time hog. I won’t start any side project involving scraping again.
  • Using online services always saved me a lot of time. They are a lot more reliable than anything I could set up. Mainly, these were:
    • Mailing services
    • Cloud deployment
  • Go the standard way. Every time I did something a bit weird, it turned out to cost me time
    • Use standard open source software, and stick to the latest version
    • Use standard, widespread technology
  • Automated testing and deployment saved me time from the start. Especially with the small amount of time I could spend on my project, it was really easy to forget details and make mistakes.
    • Here is the Heroku deployment script I used to test and deploy in a single shell call
    • And here is a Heroku workaround to run some cron tasks weekly; this allowed me to run some scraping tests every week on Heroku
  • It took all my time! Next time I start a side project, I’ll be prepared to
    • Spend all my free time on it (my time was divided between day job, family and side project)
    • Spend all my learning time (books, online training …) on it
    • Choose something that I am passionate about!
    • Choose a different kind of project to fit my constraints
      • Joining an existing open source project would let me focus on technical work at my own pace
      • Volunteering for a non-profit project might be less time intensive while still providing some fulfilment
  • I did my project alone, and it was hard to keep my motivation high in the long run. Next time:
    • I’ll team up with someone else
    • I’ll time-box the project before pivoting or starting something completely different
  • I did not manage to get anything done until I settled into a regular daily rhythm. I ended up working from 5:30am to 7:30am; I first tried the evenings, but after a day’s work I was too tired to be really productive.
  • When I could afford it, paying for things or services really saved me time. I’m thinking of:
    • A fast computer
    • Some paid online services
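For instance, the “test and deploy in a single shell call” idea boils down to chaining the test suite and the deployment, aborting as soon as anything fails. Here is a minimal Ruby sketch (the exact commands are assumptions about a typical Heroku app, not my original script):

```ruby
# deploy.rb - run the tests, then deploy, in one call.
# The commands below are assumptions about a typical Heroku setup.

def deploy_commands(remote: 'heroku', branch: 'master')
  [
    'bundle exec cucumber',          # run the end-to-end tests first
    "git push #{remote} #{branch}",  # deploy by pushing to the Heroku remote
    'heroku run rake db:migrate'     # migrate the production database
  ]
end

# Run each command in order, aborting on the first failure so that a
# broken test suite never reaches production.
def deploy!(runner = ->(cmd) { system(cmd) or abort("failed: #{cmd}") })
  deploy_commands.each { |cmd| runner.call(cmd) }
end
```

Passing the runner as an argument keeps the script testable without actually shelling out.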

To be sure, doing a side project seriously is a heavy time investment, but there are also a lot of benefits!

Here is what I gained

  • Real experience with new technologies. For me, this included
    • Ruby on Rails
    • Coffeescript
    • HTML scraping
    • Dev-ops practices with Heroku
    • Web design with HTML and CSS
  • I also learned a lot of non-technical skills in which I was completely inexperienced
    • Web marketing
    • Blogging
    • Mailing
  • Trying to bootstrap a for-profit side project is like running a micro company; it’s a good opportunity to understand how a company is run. This can help you become a better professional in your day job.
  • Having control over everything is a good situation in which to use Lean techniques.
  • Failing allowed me to actually understand Lean Startup! The ideas are easy to understand in theory; practice is a very different thing. It should help me with my next project.
  • Resolving real problems on my own was a very good source for valuable blog articles.
  • I collaborated with very clever people on open source libraries
    • By fixing some bugs in some libraries I was using
    • By releasing some parts of my code as open source libraries

Next time, I hope I’ll get more euros as well!

You’ve got nothing to lose from trying! Just do it. Give yourself a year to get some small success, and then continue or start again with something else!

Enabling Agile Practices and Elephant Taming

Everybody knows about the agile software development promise “Regularly and continuously deliver value”. This is how it is supposed to work:

  • Iterative
  • Focusing on what is needed now
  • Release as soon as possible
  • Planning small stories according to the team’s velocity

It all seems common sense and simple. Especially for people who don’t code. That’s not the whole story though; let’s have a look at a few variations:

Suppose a team uses Scrum but does not do any automated testing. As soon as the software is used, bugs will wreak havoc on the planning. The velocity will quickly fall; within a few months, the team won’t be able to add any value. Surely things could be improved with some rewriting and upfront design … but this does not sound like Scrum anymore.

Now let’s suppose that another team also uses Scrum and automated tests, but mistook Sprint and KISS for quick-and-dirty coding. Thankfully, this team won’t get too many bugs in production! Unfortunately, any change to the source code will trigger hundreds of test failures: again, the velocity will decrease. I’ve been on such projects; in about 2 years, the team got really slow, and might eventually drop their test suite …

These two examples show that automated testing improves the situation, but also that it is not enough! Quite a few agile practices are in fact enabling practices: practices that are required for the process to fulfil the agile promise described at the beginning of this article. Most come from eXtreme Programming and have been reincarnated through Software Craftsmanship. That’s what Kent Beck meant when he said that XP practices reinforce each other. Here is an example:

Let’s take coding standards and pair programming, which really seem to be a programmer’s choice. It turns out that they help to achieve collective code ownership. Which in turn helps to get ‘switchable’ team members. Which helps the team make good estimates. Which is required to have a reliable velocity. Which is a must-have to regularly deliver value on commitment!

It turns out that all of the other original XP practices help to achieve the agile promise.

After a lot of time spent writing software, I now tend to think of the code as the elephant in the room. It directly or indirectly constrains every decision that is made. Recognize and tame your elephant or you’ll get carted away …

… or dragged away …

… or trampled …

Cucumber_tricks Gem : My Favorite Gherkin and Cucumber Tricks

I just compiled my Gherkin and Cucumber goodies into a gem. It’s called cucumber_tricks and the source code can be found on GitHub. It’s also tested on Travis and documented in detail on Relish.

The goal of all these tricks is to be able to write more natural English scenarios. Here is an extract from the README of the gem, which explains what it can do:

Use pronouns to reference previously introduced items

foo.feature

Given the tool 'screwdriver'
When this tool is used

steps.rb

A_TOOL = NameOrPronounTransform('tool', 'hammer')

Given /^(#{A_TOOL})$/ do |tool|
  ...
end

Use the same step implementation to handle an inline arg as a 1-cell table

steps.rb

GivenEither /^the dog named "(.*)"$/,
            /^the following dogs$/ do |dogs_table|
  ...
end

foo.feature

Given the dog "Rolphy"
...
Given the following dogs
  | Rex  |
  | King |
  | Volt |

Add default values to the hashes of a table

foo.feature

Given the following dogs
  | names | color |
  | Rex   | white |
  | King  | sand  |

steps.rb

Given /^the following dogs$/ do |dogs|
  hashes = dogs.hashes_with_defaults('names', 'tail' => 'wagging', 'smell' => 'not nice')

#  hashes.each do |hash|
#    expect(hash['smell']).to eq('not nice')
#  end

  ...
end

Define named lists from a table

foo.feature

Given the following dishes
  | Spaghetti Bolognaise | => | Spaghetti | Bolognaise sauce |       |         |
  | Burger               | => | Bread     | Meat             | Salad | Ketchup |

steps.rb

Given /^the following dishes$/ do |dishes|
  name_2_dishes = dishes.hash_2_lists

#  expect(name_2_dishes['Burger']).to eq(['Bread','Meat','Salad','Ketchup'])

  ...
end

Visit Relish for more detailed documentation.
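For the curious, the pronoun trick boils down to remembering the last item introduced by name and substituting it when a pronoun shows up. Here is a minimal sketch of the idea in plain Ruby (hypothetical names, not the gem’s actual implementation):

```ruby
# Resolve pronouns like "this tool" to the last explicitly named item,
# falling back to a default when nothing has been introduced yet.
class PronounResolver
  PRONOUNS = %w[this it].freeze

  def initialize(default)
    @last = default
  end

  # Called for every captured value: names are remembered,
  # pronouns are replaced by the last remembered name.
  def resolve(word)
    PRONOUNS.include?(word) ? @last : (@last = word)
  end
end
```

With such a resolver, `Given the tool 'screwdriver'` remembers ‘screwdriver’, and `When this tool is used` resolves ‘this’ back to it.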

My New Gem for Creating Rspec Proxies

I already wrote a lot about test proxies (here, here and here).

I just took the time to transform my previous gist into a full-fledged Ruby gem. It’s called “rspecproxies” and it can be found on GitHub. It’s fully tested and documented, and there’s a usage section in the README to help anyone get started.

Here are the pain points proxies try to fix:

  • Without mocks, it is sometimes just awfully painful to write the test (do you really want to start a background task just to get a completion ratio?)
  • With classic stubs, you sometimes have to stub things your test is not interested in, and you end up with an unmaintainable, extra-long stub setup

Let’s have a look at a few examples of tests with proxies:

  • Verify the actual load count without interfering with any behaviour
it 'caches users' do
  users = User.capture_results_from(:load)

  controller.login('joe', 'secret')
  controller.login('joe', 'secret')

  expect(users).to have_exactly(1).items
end
  • Use proxies to stub an object that does not yet exist
it 'rounds the completion ratio' do
   RenderingTask.proxy_chain(:load, :completion_ratio) {|s| s.and_return(0.2523) }

   renderingController.show

   expect(response).to include('25%')
end

I’d really love to see more code tested with proxies; it makes the whole testing experience so much more natural. As with any testing technique, the easier the tests are to write, the more thorough the testing gets.
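To give an idea of what such a proxy does under the hood, here is a minimal sketch in plain Ruby (a hypothetical helper, not the gem’s actual code): the original method still runs normally, but every return value is recorded for later assertions.

```ruby
# Wrap a method on a single object so that it keeps its behaviour,
# while every return value is captured in the returned array.
def capture_results_from(target, method_name)
  results = []
  original = target.method(method_name)  # stays bound to the original definition
  target.define_singleton_method(method_name) do |*args, &blk|
    value = original.call(*args, &blk)
    results << value   # record without changing behaviour
    value
  end
  results
end

# Hypothetical usage: count how many results a loader actually produces.
store = Object.new
def store.load(name)
  "user-#{name}"
end

captured = capture_results_from(store, :load)
store.load('joe')
store.load('joe')
captured.size  # two results captured, so no caching happened here
```

The captured `Method` object keeps pointing at the original implementation even after the singleton method is redefined, which is what lets the proxy observe without interfering.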

Better Error Messages When Testing HTML Views

When testing HTML views, either from RSpec or from Cucumber, XPath can be really helpful to quickly find expected elements.

Unfortunately, a bit like with regular expressions, when you start to use XPath to solve a problem, you often end up with 2 problems … Part of the reason is that XPaths tend to be cryptic. In the case of testing, error messages coming from unmatched XPaths are even more cryptic!

That’s why I had the idea for xpath-specs: a small gem that lets you associate a description with an XPath and nest XPaths together, all to simplify tests and assertion-failure reporting.

For example, with an assertion like this:

expect(html).to contain_a(dish_with_name("Grilled Lobster"))

Here is the kind of failure message one can get:

expected the page to contain a dish that is named Grilled Lobster (//table[@id='dish-panel']//tr[td[contains(.,'Grilled Lobster')]])
       it found a dish (//table[@id='dish-panel']//tr):
          <tr><td>Pizza</td>...</tr>
       but not a dish that is named Grilled Lobster (//table[@id='dish-panel']//tr[td[contains(.,'Grilled Lobster')]])

And here is the required setup:

# spec/support/knows_page_parts.rb

module KnowsPageParts
  def dish
    Xpath::Specs::PagePart.new("a dish", "//table[@id='dish-panel']//tr")
  end

  def dish_with_name(name)
    dish.that("is named #{name}", "[td[contains(.,'#{name}')]]")
  end
end

Have a look at the readme for more details.

Refactoring Trick to Insert a Wrapper

Last week at work, we decided that we needed an Anticorruption Layer between our code and another team’s. They had been using our internal data structures as they needed to, in an ad hoc way. This turned out to be an issue whenever we wanted to refactor our code. The goals of this layer are:

  • to provide an explicit API layer, controlling what is accessible from the outside
  • to allow us to improve our implementation independently of this API

The first step the whole team agreed on was to provide direct wrappers around our classes. Unfortunately, some of these classes had more than a thousand references to them, and our IDE does not provide any automated refactoring for this (introduce a wrapper class, and only use it in some parts of the code). We found a trick! Here it is:

  1. Make sure you have a clean source-control state
  2. Rename the class to be wrapped (let’s call it Foo) into FooWrapper
  3. From source control, revert the parts of the code where you want to continue using Foo directly
  4. In source control, revert Foo and FooWrapper
  5. Manually (re)create the FooWrapper class
  6. Create FooWrapper.wrap(x) and FooWrapper.unwrap(x) methods
  7. Fix all the compilation issues (mostly by calling wrap() and unwrap())
  8. Run your tests and fix any remaining points

That saved us a whole lot of time. If your layer contains several classes with references between them, there is an optimal order in which to introduce the wrappers. Any order will work, but some will require more temporary calls to wrap and unwrap (step 7). In the end, the wrap() and unwrap() methods should only be called from within the layer.

Often you’ll find out that to complete the wrapping of a class, you first need to wrap another class. You can:

  • Follow the Mikado Method strictly: update your Mikado graph, revert all your changes, and try to wrap this other class first. It can seem slow, but it is completely incremental
  • Wrap the 2 classes at the same time: this is the best way when wrapping the other class is rather straightforward
  • Insert temporary calls to wrap() and unwrap(): they’ll be removed when you later wrap the other class. This might be the only way if the classes have cyclic dependencies.
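The wrapper skeleton from steps 5 and 6 can look like this (a sketch in Ruby, with Foo standing in for the real wrapped class and `name` as an assumed example of a delegated method):

```ruby
# FooWrapper exposes only the API we want public; wrap/unwrap convert
# between wrapped and raw objects at the layer boundary.
class FooWrapper
  def self.wrap(foo)
    foo.nil? ? nil : new(foo)
  end

  def self.unwrap(wrapper)
    wrapper.nil? ? nil : wrapper.wrapped
  end

  def initialize(foo)
    @foo = foo
  end

  # Explicitly delegated, controlled API (an assumed example method).
  def name
    @foo.name
  end

  # Only meant to be called by unwrap, from within the layer.
  def wrapped
    @foo
  end
end
```

Handling nil in wrap/unwrap keeps the boundary calls safe to sprinkle mechanically while fixing compilation issues in step 7.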

How We Introduced Efficient Agile Retrospectives

6 months ago, our team started to run systematic iteration retrospectives. Within these 6 months, our team became more agile than ever. Running efficient retrospectives is what makes good teams great; it is what truly makes a team agile. Here is our story.

At the beginning, we started with a standard retrospective format inspired by The Art Of Agile Development (a truly great book, by the way). As I was used to running retrospectives, I did the first one. This is how it goes:

  • Repeat Norm Kerth’s prime directive to everyone (5 minutes):

Regardless of what we discover today, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

If the message feels awkward, the facilitator can explain that the point is to avoid counterproductive blaming (even if it is unlikely to happen in your setting …)

  • Do some kind of brainstorming (10 minutes):

On his own, everyone writes what went well on green post-its, and what did not go so well on blue post-its. Everybody sticks them up on the flipchart.

  • Group insights together (15 minutes):

The facilitator reads every post-it aloud, asking team members for details, and tries to group notes together. When everyone is OK with the groups, dot vote: every team member gets 3 points that he can assign as he wishes to any group. Here is what the board should look like at this point.

  • Find actions to improve the process (30 minutes):

Pick the 3 most-voted issues and give everyone 5 minutes to think of useful actions to fix them. Then take another 5 minutes so that people can discuss their solutions in pairs, and another 5 minutes to do the same in groups of 4. Finally, do the same all together while the facilitator reports on the flipchart. Make sure that the solutions are doable within the next sprint; if not, discuss and split them until they are. Again, dot vote for the preferred actions.

  • Do it:

As soon as the retrospective is finished, the facilitator must enter the actions into the coming sprint backlog. These actions are not user stories, but they must be done. If they are not, the team cannot be agile, as there will be no continuous improvement. Don’t assign any story points to these items, but let the velocity auto-adjust to a given amount of improvement during every sprint.

After the first one, every team member facilitated such a retrospective, one after the other. When everyone was at ease with this, we changed the format. We bought Agile Retrospectives: Making Good Teams Great, and everyone became responsible for designing his own retrospective session when his turn came. This brought different and varied insights.

To conclude, here are a few retrospective hints and guidelines:

  • Everyone’s voice should be equal; everyone should feel free to talk
  • Don’t worry if good ideas are not selected the first time; they’ll come back
  • At the beginning of every retrospective, review what happened with what was decided during the previous one
  • Use sticky flipcharts and stick them on the wall as you go through the different activities
  • Prepare flipcharts in advance to make the retrospective run more smoothly
  • Use colored post-its to highlight different aspects (good things, bad things or whatever)
  • Use markers to write on post-its (how else could one read them from the back of the room?)

If you haven’t yet, you have no excuse not to start now!

Automatic Travis Daily Build With Heroku Scheduler

As I just released auchandirect-scrAPI, which relies on scraping, I needed a daily build.

The Travis team is already working on this, and I found a small utility app called TravisCron where anyone can register his repo for an automatic build.

Unfortunately, the feature is not yet ready in Travis, and the TravisCron guys have not yet activated my repo. After having a look at the TravisCron source code and the Travis API, I found out that it is really simple to do the same thing on my own.

That’s how I created daily-travis. It’s a tiny Rake task, ready to be pushed and automatically scheduled on Heroku, that restarts the latest build when run.
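The core of such a task fits in a few lines. Here is a hedged sketch (the endpoint paths and the token header format are my assumptions about the Travis API of the time, not necessarily what daily-travis does; check its README for the real thing):

```ruby
require 'json'
require 'net/http'
require 'uri'

TRAVIS_API = 'https://api.travis-ci.org'

# Assumed endpoint listing a repository's builds, latest first.
def builds_url(slug)
  URI("#{TRAVIS_API}/repos/#{slug}/builds")
end

# Assumed endpoint asking Travis to restart a given build.
def restart_url(build_id)
  URI("#{TRAVIS_API}/builds/#{build_id}/restart")
end

# Fetch the latest build of the repo and ask for a restart.
def restart_latest_build(slug, token)
  builds = JSON.parse(Net::HTTP.get(builds_url(slug)))
  uri = restart_url(builds.first['id'])
  request = Net::HTTP::Post.new(uri)
  request['Authorization'] = "token #{token}"
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
end
```

Scheduled daily by the Heroku Scheduler, this is enough to get a daily build without waiting for native support.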

Details are in the README.

@Travis: Thanks again for your service.