Incremental Architecture, a Cure Against Architecture Astronauts

Back in 2001, when I started to code for a living, fresh out of school, I was mainly doing a form of cowboy coding. After a few months of maintaining my own mess, I started to recall my university lessons : we should be doing design before coding …

When I was asked to re-engineer the ‘wizards UI’, I paused my coding to design something clean from scratch. It worked quite well at first : the overall code was a lot simpler and contained a lot less duplication than before. Seeing this shiny new UI, product people asked for new features. Unfortunately, I hadn’t thought of them when designing this little framework. I was almost back to my initial situation.

That’s how I started to look for another way to design software. At about the same time the eXtreme Programming book fell into my hands. That’s where I discovered the idea of incremental design and architecture.

Front cover of the first edition of the XP book

What is Incremental Architecture

Let’s start with the antithesis of incremental architecture :

Astronaut Architecture

The term “Architecture Astronaut” was coined by Joel Spolsky back in 2001. If you haven’t read this classic post yet, I strongly encourage you to do so. Basically, he explains that we should not be impressed by architects talking over our heads about overly abstract stuff.

Incremental architecture is the exact opposite of astronaut architecture

Two Schools of Software Architecture

Traditional architecture is about taking up-front choices that will be difficult to change. Incremental architecture is about preparing for non-stop change and taking decisions as late as possible.

The idea behind incremental architecture is really simple : keep your code simple, clean and automatically tested, so that you can adapt your code and architecture when it becomes truly necessary.

Pros and Cons of incremental architecture

The first reaction of most software engineers (me included, remember how my story started) is that it can only work on trivial stuff. After practicing it for about a decade, I am now convinced it works most of the time. I’m not alone : James Shore (who has written much more on the subject) also shares my view:

Common thought is that distributed processing, persistence, internationalization, security, and transaction structure are so complex that you must consider them from the start of your project. I disagree; I’ve dealt with all of them incrementally.

Two issues that remain difficult to change are choice of programming language and platform. I wouldn’t want to make those decisions incrementally!

I would add published APIs to this list.

Granted, there are situations that incremental architecture alone cannot handle. What about its good points, then ?

In all the other cases (and that means most of the time), here is what you get :

  • As you won’t need to deal with future use cases, you’ll do less work
  • That, in turn, will keep your code simpler, decreasing the time to release new features
  • As change is built-in, you’ll be able to improve your architecture in ways you could not have imagined from the start !

Front cover of the Art of Agile Software Development book

If you cannot see how this could possibly work, read on !

How to do it

eXtreme Programming

As I said earlier, incremental architecture emerged from eXtreme Programming. It won’t come as a surprise that, in order to work well, incremental architecture requires the XP practices to be in place. In particular, the code base should be automatically tested, the continuous integration cycle should take less than 10 minutes, the design should be simple, and the team should be good at refactoring.

Don’t expect to be able to do incremental architecture without these practices in place. But this alone might be enough already !

Front cover of the Martin Fowler's refactoring book

Architecture Vision

At work, where our team consists of 9 developers, it’s not always that simple to coordinate and all pull in the same direction. That’s why we find it useful to share a very long term architecture vision (Enabling Incremental Design and Evolutionary Architecture). This will help people to make coherent decisions when hesitating between 2 alternate designs.

The vision can be the result of the work of a pair, or a mob brainstorming or whatever. Building this vision is typically an activity where experienced programmers can contribute a lot of value.

Once this vision is shared and understood by the team, every time a pair has to work on a story, they can orient the design towards it. But always as little as possible to finish the work at hand ; remember the XP mottos KISS (Keep It Simple, Stupid) and YAGNI (You Ain’t Gonna Need It).

One final word … a vision is just that : a vision ! It might turn out to be true or false ; be ready to change it as circumstances change.

Spikes

At times, even with a story in your hands and a long term architecture sketch on the whiteboard, you might find it difficult to know how to change your design to fulfill both.

As always in XP, in case of uncertainty, use Spikes ! Spikes are short, time-boxed experiments of throwaway code, whose goal is to answer a specific design question.

How to mitigate the risks

What about these topics that don’t yield to incremental architecture ? What if you discover late that you need to change your platform ? Or your API ?

Obviously, you should think about these questions up-front. Fortunately, they are usually not that difficult to answer. But over time, Non-Functional Requirements and technologies change. Large, long-living systems are particularly likely to need to move to a new platform someday.

Unix had the answer : build your system out of small tools that do only one thing well, and that communicate through a standard protocol. Systems built that way can be rewritten one piece at a time.

Ken Thompson and Dennis Ritchie, the creators of Unix

Photo from WikiMedia

The modern version of this is the micro-services architecture. Incremental architecture allows you to start with a monolith, split it when you need to, and replace micro-services as needed, with the safety of simple code and a great automated test harness. Interestingly, successful software systems that were architected up-front also take this road … without the safety !

The Architect

Good news : no more PowerPoints and a lot more coding with the team ! Here is what’s expected from an incremental architect :

  • To code with the team. As Bertrand Meyer once said, “Bubbles (aka. diagrams) don’t crash” : it’s just too easy, and wrong, to mandate architecture without living with the consequences
  • To come up with more ideas when drafting the long term vision
  • To keep an eye on the ‘long term’ while being the navigator in pair programming
  • In the second edition of the XP book Kent Beck suggests that the architect should write large scale tests to stress the system and demonstrate architecture issues to the team
  • To delegate as much as possible to the team. However smart the architect, the team as a whole is smarter ! Delegating architecture increases motivation and the quality of the outcome.

End of the story

I’ve been practicing incremental architecture and design for a long time now. It made my life a lot simpler ! Most architecture questions become backlog items to prioritize.

One last piece of advice : be prepared to re-read Joel Spolsky’s article whenever you get caught up in architecture meetings …

How to Get Your Team to Do Code Reviews

As software developers, we very often get to work in code bases that are not perfect. In this situation we have 3 choices : leave, grumble, or make some changes ! Team wide code reviews are a recognized way to increase the quality of the code.

Unfortunately, installing code reviews as part of the daily work habits of a team can be very challenging. When I joined my team 3 years ago, no one was doing any kind of code reviews. With a small push here and there, I managed to get the team to adhere to a strict 4 eyes principle (full story here).

Here are a few strategies that I have either used or seen that should get your team mates to do code reviews.

Overall principle

Even if you are at the bottom of the org chart, you have far more influence than you would first think. My favorite way of bringing change is to demonstrate a valuable practice :

  • First, you need to be trustworthy
  • Then, do the practice you want to introduce
  • Make sure it is seen as valuable
  • Be ready to forgo the credit for introducing the practice
  • Keep on until people start to copy what you are doing

As someone famous said

A man may do an immense deal of good, if he does not care who gets the credit

I won’t go into the details of how to be trustworthy, which could be a post of its own. Basically, putting our customers’ interests first, speaking the truth and avoiding appearing dogmatic can get us a long way already. The Clean Coder is an excellent read on the subject.

Front cover of the Clean Coder book

Strategies

If you have retrospectives in place

In this case, you already have a place and time dedicated to discussing changes to your working agreements. Expressing your concerns about code quality (or another problem related to code reviews) and suggesting code reviews as a way to fix that problem might get a quick team buy-in.

If you don’t manage to get a definitive buy-in, try to get the team to ‘beta-test’ code reviews for a while. If the experiment demonstrates value, it will convert into a full-fledged working agreement.

If you practice collective code ownership

Unfortunately, if you don’t have retrospectives in place, or if you did not manage to get your team to discuss code reviews in retrospectives, you’ll need to find another way to introduce them.

If you have collective code ownership, it should be ok to comment on your team mates’ code (if not, jump directly to the next strategy). In this setting, just start to do some code reviews for others ! Make sure your reviews are helpful and ‘nice’.

You’ll need to stick to doing code reviews long enough before people actually start to mimic you. Reserve some time in your daily agenda for code reviews. Your goal is to win people over, so it might be a good idea to start with a selected few at the beginning, preferably people who are more likely to jump in. If asynchronous (tool based) reviews don’t get answered, be ready to fall back to face to face discussions : review on your own, then just ask the author for a few minutes so that you can discuss their change. When you feel someone is interested in your reviews, ask them to review your own code in return.

Remember to always try to get some feedback : ask people what they think of the exercise, keep note of the good points, and adapt to smooth out the rest.

Illustration of a team working collectively

Photo from emotuit

Once you’ve won over your first team mate, involve them in your grand plan to spread the practice, explaining how much you think this could make a difference. As more and more people get convinced, the practice will eventually become a tacit part of your working conventions.

Depending on your context, this might take more or less time. I said it was possible, I never said it would be easy ! Grit, patience and adaptation are key here.

Otherwise

This is the worst starting point, basically, you have nothing yet. The strategy is very similar to the one with collective code ownership, with a different first move.

Instead of providing code reviews to your team mates, start by walking over to them to ask for a face to face code review of your own commits. Use the same tactic as stated before : stick to the same people at first. Once the practice starts to stick within this group, bring in a basic tool to ease up the process.

At some point, you should be asked to review others’ code : that’s a good sign ! If not, try again with other people.

Continue using the same strategy as with collective code ownership and you should eventually get there !

When it does not seem to stick

There could be many reasons why the practice is not adopted. The key for you is to understand why, and to adapt your strategy. The reason is often that the perceived value is not big enough, for example :

  • the team is not aware of its problems that reviews would fix : try to make them more visible
  • reviews are seen as too expensive or painful : try better tools or taking more on yourself
  • the team has bigger problems to fix first : spend your energy on these first !
  • reviews just don’t work in your context (ex : your job is to write one-time, throwaway code) : it’s up to you to stay or leave !

Tools

There are a ton of tools and best practices to run code reviews. It’s important that you know them, so that you know where you are going.

Don’t expect to use the best tools from the start though. At the beginning, your goal is to win over your team mates. In this context, only 2 things matter :

  • It should have almost no adoption curve, so that others start using it
  • It should have almost no maintenance cost, as you don’t want to spend your time doing that

That’s why at the beginning, low tech tools are so great. Spending a month setting up a top notch code review system before the first review won’t work. If your VCS has code reviews built-in, by all means use it ! Otherwise, diff in mails and face to face conversations are a good starting point. You’ll later hook something in your VCS to automatically send mails with commit diffs …

As people gradually get convinced of the value of code reviews, regularly meet and discuss a better setup. This is how you’ll introduce state of the art tools and agree on refinements such as pre or post commit reviews.

Best practices

As a code review champion, it’s very important that you provide great reviews to your team mates. You must become the local expert on the subject ! You don’t want all your efforts to be ruined because one of your reviews has been perceived as aggressive.

A slide from Atlassian presentation about styles of code reviews

There is a ton of resources on the internet about how to perform good code reviews.

What’s next ?

Congratulations ! Your team will start to reap the benefits of code reviews. Keep on improving the practice !

To end the story, after a few months of code reviews, during a retrospective, my team (at work) decided to take it one step further and started to do almost full time pair programming ;–)

A Seamless Way to Keep Track of Technical Debt in Your Source Code

I eventually stumbled upon a way to keep track of technical debt in source code that is both straightforward and already built into most tools : simple TODO comments !

Photo of a screen displaying source code with #TODO comments

How it happened

Some time ago, we tried to add @TechnicalDebt annotations in our source code. Unfortunately, after a few months, we came to the logical conclusion that it was too complex to be effective :

  • It involved too much ceremony, which frightened people
  • It made people uneasy about changing anything around the annotation, instead of acting as a call to action
  • As a result, it was always out of date

After a bit of discussion with my colleagues, we decided to replace all these annotations with simple TODO comments.

When the refactoring to do seems fairly obvious (but also premature), we’ll use a straightforward //TODO introduce a factory comment (for example). Next time a pair gets to work on this part of the code, they get the silent opinion of their peers to help them decide what to do about this piece of the code. Other times, the code might be smelly without us knowing yet what to do about it ; in this case, we agreed to use //TODO SMELL responsibilities are not clear in this class (for example), which is still a TODO comment, but not a clear call to action.

When I started my current side project, I naturally started to use them. They display nicely in CodeClimate.

The pros

Screenshot of the CodeClimate issue dashboard displaying TODO comments

The great thing about TODO comments is that, as a very old programming trick, they are already supported out of the box by most tools : IntelliJ, SonarQube, Rails, CodeClimate and I guess many others. Only one day after I refactored to TODO comments, a team mate fixed one that had appeared in his IDE’s TODO tab !

The cons

Some tools, IDEs in particular, tend to assume that you should fix all your TODOs before you commit anything. That’s not exactly how we are using them to track lasting technical debt. So that’s one thing you need to keep in mind.

Tools like Sonar, on the other hand, assign a fixed remediation cost to any TODO you have in the code, which is usually not accurate at all !

How to set it up in your project

As you might guess, this is pretty easy. Just start adding TODO comments in your code …

Teamwise

It is worth first validating the practice with your colleagues though. There are many ways to do that, depending on your team’s work habits :

  • Use your team Slack (or whatever chat room you use) to share a link to this post (for example) and create a yes/no poll
  • Or if you think you need it, create a wiki page explaining the practice and detailing its rationale in your context, add a yes/no poll, and finally share this page with your team
  • Finally, if you think that this topic deserves it, set up a meeting with everyone and discuss the point. It might be worth sharing information about the practice beforehand to make the meeting more efficient. You can end the discussion with a thumb vote (up : yes, down : no, side : whatever)

Thumbs voting positions

Don’t wait for unanimity to start the practice, majority is enough ! Make sure that people who voted the other way will follow the team practice in the end though. Remember that whatever the answer, discussing team practices is good.

Once the whole team has agreed on using (or not) TODO comments, mention the practice in your team’s coding conventions or working agreements (which I strongly recommend writing down somewhere). If you don’t have any yet, create some !

Toolswise

Most tools will handle TODO out of the box.

  • Rails comes with a rake notes task to list TODO comments.
  • CodeClimate and SonarQube both list TODOs as issues in their default config
  • Most IDEs have a ‘TODO’ tab which will display the TODO comments in the project
  • Otherwise, good old grep will very happily find TODO comments in your code
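For the grep-style option, a small script can also do the job. Here is a minimal sketch (the directory to scan, the file extensions and the TODO SMELL convention from earlier are assumptions to adapt to your context) :

```python
import os
import re

# Matches both plain TODOs and the 'TODO SMELL' variant described above,
# in //-style and #-style comments
TODO_PATTERN = re.compile(r"(?://|#)\s*TODO(?P<smell>\s+SMELL)?\s*(?P<message>.*)")

def find_todos(root, extensions=(".py", ".java", ".cpp")):
    """Walk a source tree and yield (path, line number, kind, message) tuples."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    match = TODO_PATTERN.search(line)
                    if match:
                        kind = "SMELL" if match.group("smell") else "TODO"
                        yield path, lineno, kind, match.group("message").strip()

# Hypothetical source directory
for path, lineno, kind, message in find_todos("src"):
    print(f"{path}:{lineno}: [{kind}] {message}")
```

Such a script can later feed a dashboard, or simply run in the continuous integration build to list the current debt.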

Some tools might require small tweaks to improve the experience :

  • In IntelliJ, in the commit window, uncheck the ‘Check TODO’ checkbox to avoid getting a warning at every commit

IntelliJ's commit window, with its 'Check TODO' check box

  • SonarQube uses the same fixed remediation cost for every TODO comment. It’s up to you to adapt this remediation cost to your context.

What’s next ?

TODO comments are a good starting point to track technical debt. Once you start using them, there are a few things you can do :

First, remember to fix some regularly. Very old TODO comments are technical debt of their own ! Using code quality dashboards like SonarQube or CodeClimate helps to continuously improve your code.

If your tools allow it, you might consider setting up a simpler //SMELL ... instead of //TODO SMELL ... or whatever other special comment that might be useful in your context.

Finally, there is a lean continuous improvement practice which consists of logging problems as they occur. Doing this could help your team to decide which technical debt hotspots are the most important to fix. When appropriate, link the problems with the TODO comments. After a few weeks of this, walking through all the problems during a retrospective should shed light on what parts of the code are causing the most troubles.
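A problem log like this can stay very low tech. Here is a hypothetical sketch of how a plain text log could be tallied before a retrospective (the ‘area: description’ line format is an invented convention, not a standard) :

```python
from collections import Counter

def hotspots(problem_log_lines):
    """Tally logged problems per code area to surface technical debt hotspots.

    Each line is assumed to look like 'billing: nightly job failed again',
    i.e. an area name, a colon, then a free-form description.
    """
    counts = Counter()
    for line in problem_log_lines:
        area, _, _ = line.partition(":")
        if area.strip():
            counts[area.strip()] += 1
    # most_common() sorts areas by number of logged problems, descending
    return counts.most_common()

log = [
    "billing: nightly job failed again",
    "wizards-ui: hard to add a new step",
    "billing: rounding bug reported by support",
]
print(hotspots(log))  # [('billing', 2), ('wizards-ui', 1)]
```

Cross-referencing the top areas with the TODO comments they contain is then a matter of minutes.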

Edit 2017-04-19

Thanks a lot for your comments ! People have suggested a ton of great improvements over my basic setup :

  • plugins to other tools that also support TODO comments
  • activating automatic sync between issues in CodeClimate and your issue tracking system
  • using custom comments markers
  • adding an ‘X’ to your comment every time you are bothered by the technical debt ; tools can be configured to assign a higher severity to issues with a lot of ‘X’
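That last suggestion could be automated with something as simple as this sketch (the X-counting convention and the regex are assumptions, not a feature of any existing tool) :

```python
import re

# A TODO whose marker is followed by one or more X's, e.g. "// TODO XXX fix caching"
MARKED_TODO = re.compile(r"TODO\s+(X+)\s+(.*)")

def severity(comment_line):
    """Return (number of X votes, message) for a marked TODO line, or None."""
    match = MARKED_TODO.search(comment_line)
    if match is None:
        return None
    return len(match.group(1)), match.group(2).strip()

print(severity("// TODO XXX fix the caching layer"))  # (3, 'fix the caching layer')
```

A report could then sort TODOs by vote count instead of using a fixed remediation cost.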

How to Mock Your Browser’s Timezone With Jasmine and MomentJS

Last week, I worked on adding a distributed countdown to my Online Planning Poker App. As our team works from Paris and Beirut, I wanted to unit test that it would work well across different timezones. I found a surprisingly simple solution.

What Google told me

I first searched Google to see how to do it, and found 2 answers that looked promising. Still, the known solutions for such a simple situation were disappointing !

What I ended up with

After a good deal of dabbling around, I eventually found a pretty simple solution using Jasmine and Moment Timezone :

jasmine.clock().install();
...
jasmine.clock().mockDate(moment.tz("2017-03-23 10:00:00", "Europe/Paris").toDate());

Obviously, the drawback is that it implies setting both the timezone and the time. This should be ok in most unit tests, but might be an issue in some cases.

Almost 15 Years of Using Design by Contract

I first read about Design By Contract in 2002, in Object Oriented Software Construction 2. I was convinced as soon as I read it, and today I still believe it’s a great and fundamental technique. That’s why I almost never write a contract ! Let me explain.

Phase 1 : DbC ignorance

I started to code professionally in 2001. This was a time when design and quality software meant Rational Rose (a UML design and code generation tool), while I, on the contrary, was just cowboy coding my way out of any problem I was given.

I wasn’t really doing Object Oriented programming, but rather imperative programming, using objects as structs, getters, setters, and classes as a way to organize the code … In this context, my design skills were improving slowly, and I was at the risk of falling in love with a local-optimum practice that would prevent me from growing further.

That’s why I started to read books such as the Gang Of Four Design Patterns, or OOSC2.

Phase 2 : DbC enlightenment

The cover of the Object Oriented Software Construction 2

Reading this book was a profound experience for me ; my programming changed fundamentally after reading it. The chapter about contracts taught me what objects are.

On the one hand, pre and post conditions can be used in any kind of programming and are just a kind of C assert macro on steroids. Class invariants, on the other hand, are a completely different kind of beast. The invariant of a class is a predicate about an instance of this class that should always be true. For example : field X should never be null, or the value of field N should always be greater than 0.

In some way, grasping the concept of invariant is close to understanding what a class is.

Phase 3 : DbC everywhere

That’s when I started to write contracts everywhere. I was writing C++ code at the time, and my code must have looked something like this :

class MonkeyWrench
{
    bool _isStarted;
    std::vector<Part>* _movingParts;

protected:

    virtual void invariant() const
    {
        assert(_isStarted == (_movingParts != NULL));
    }

public:

    MonkeyWrench()
    {
        this->_isStarted = false;
        this->_movingParts = NULL;

        invariant();
    }

    bool isStarted() const
    {
        return this->_isStarted;
    }

    void start()
    {
        assert(!this->isStarted());
        invariant();

        this->_isStarted = true;
        this->_movingParts = ...

        invariant();
        assert(this->isStarted());
    }

    const std::vector<Part>& movingParts() const
    {
        assert(this->isStarted());
        invariant();

        return *this->_movingParts;
    }
    ...
};

I definitely over-abused contracts : it made the code unreadable. Worse, I sometimes used excessively long and intricate assertions, which made the problem even worse.

Fortunately, overusing contracts also taught me a lot in a short time. Here are some of the lessons I learned :

  • DbC is not very well supported : it’s almost never built into the language, and edge cases like inheriting an invariant or conditions can become messy pretty fast.
  • Checking for intricate contracts at every method call can be pretty slow.
  • Checking everything beforehand is not always the simplest thing to do, at times, throwing an exception on failure just does a better job.
  • It happened that removing the contract made the code do just what I wanted. It’s easy to write unnecessarily strict contracts.
  • Command Query Separation Principle is great ! Having ‘const’ or ‘pure’ queries that don’t change anything makes writing contracts a lot simpler.
  • Preconditions on queries are painful. When possible, returning a sensible ‘null value’ works better, nothing is worse than getting an error when trying to call a const query from the interactive debugger.
  • Finally, the more immutable a class is, the simpler the invariant. With a lot of mutable fields, you might resort to having the invariant check that fields are synchronized as expected. If fields are immutable, this need simply vanishes.
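To make the Command Query Separation point concrete, here is a minimal sketch (in Python rather than the C++ of that era ; the Account class and its rules are invented for illustration) of how side-effect-free queries keep contracts short :

```python
class Account:
    """A tiny example of Command Query Separation easing Design by Contract."""

    def __init__(self, balance=0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # The invariant relies only on side-effect-free queries
        assert self.balance() >= 0, "balance must never be negative"

    # Query : returns a value, changes nothing
    def balance(self):
        return self._balance

    # Command : changes state, returns nothing
    def withdraw(self, amount):
        assert 0 < amount <= self.balance()  # precondition, built from queries
        self._balance -= amount
        self._check_invariant()

account = Account(100)
account.withdraw(30)
print(account.balance())  # 70
```

Because balance() has no side effects, it can safely appear in the invariant and in preconditions without changing the behavior under test.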

Phase 4 : DbC hangover

At the same time I discovered all these small subtleties about contracts, I fell upon Martin Fowler’s book Refactoring: Improving the Design of Existing Code and started to use unit tests extensively. This led me to the following conclusions :

  • Tests are more efficient at producing quality software
  • Contracts can be a hindrance when trying to do baby-steps refactorings as described in Martin Fowler’s book

On top of that, as DbC is not natively supported by languages, no documentation is generated, meaning that most of the time, the callers still have to look into the code. As a result, I was using contracts less and less often.

Phase 5 : DbC Zen

Looking back, I might not be writing a lot of asserts in my code, but I am still thinking in terms of contracts all the time. In fact, there are a ton of ways to use DbC without writing assertions :

  • Use as much immutability as possible. An immutable class does not need to check its invariant all the time, just throwing from the constructor if arguments are not valid is enough.
  • Use conventions as much as possible ; for example, constructor arguments should remain set for the whole life of the object (cf Growing Object Oriented Software Guided by Tests, which describes the different ways to inject something in an object)
  • Looking back at my DbC assertions, most related to null values. Again, conventions work better ! At work, we simply forbid passing null values around. If something can be null, it means it’s optional, and Java has an Optional&lt;T&gt; class for just that (I’m pretty sure it is possible to do something even better with C++ templates). In this case, if the contract is broken, a NullPointerException will eventually be our assertion.
  • Replace as many pre & post conditions with invariants on the callee, the arguments or the return objects as possible. It makes sense as it’s just making sure we are using ‘valid’ objects everywhere. Again, if these objects are immutable, it makes the whole thing even simpler !
  • To take further benefit of the invariant of immutable objects, introduce new types. For example, instead of changing an object’s state through a command with associated involved contracts, split the class in 2 and make the method a query returning an immutable object, potentially making the initial class immutable as well. Remember, immutable classes mean almost no assertions !
  • Use your language. For example, instead of asserting that 2 lists remain of the same length, refactor to a list of pairs ! (I know that’s an obvious example, but you get the point)
  • If you are using a statically typed language, use types ! For example, I remember that on one project I worked on, we had a bug involving a duration : somewhere in the code, milliseconds got mistaken for seconds … We fixed that by replacing the integer with a TimeSpan all over the place. Again, that’s so obvious !
  • Eventually, when all else fails, or when it’s just too much overhead, use the simple asserts provided by your language or common libraries.
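The ‘use types’ advice translates directly to most languages. Here is a tiny sketch in Python (the polling function is invented for illustration), where the standard timedelta plays the role TimeSpan played in that story :

```python
from datetime import timedelta

# Before : a bare number, where milliseconds and seconds are easily mistaken
def poll_every_raw(interval):
    ...  # is 'interval' in seconds ? in milliseconds ? the type doesn't say

# After : the type carries the unit, so the mistake becomes impossible
def poll_every(interval: timedelta) -> str:
    return f"polling every {interval.total_seconds()} seconds"

print(poll_every(timedelta(milliseconds=1500)))  # polling every 1.5 seconds
```

The contract “this integer is a number of milliseconds” disappears : it is now enforced by construction.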

To come back to the previous code section, this is how it could be written without assertions :

class MovingMonkeyWrench
{
    const std::vector<Part> _parts;

public:
    MovingMonkeyWrench() : _parts(...) {}

    const std::vector<Part>& parts() const
    {
        return this->_parts;
    }
    ...
};

class MonkeyWrench
{
public:
    MovingMonkeyWrench start() const
    {
        return MovingMonkeyWrench();
    }
    ...
};

Details are omitted, but it’s easy to see how much shorter the code is.

Conclusion

When applying all the techniques above, you’ll see that cases for explicit assertions are rare. Fewer assertions also work around the issues coming from the poor support for DbC : no documentation and intricate corner cases.

In the end, thinking in contracts made my code more ‘functional’. I’m not the only one to have made this journey : if you are interested, you should read Eric Evans’ DDD book, where he presents things like immutable value objects and specification objects.

My Ultimate Jira Personal Kanban

A few years ago, I wrote about how I started to use Jira as my personal Kanban board at work. A lot of things have changed since then, which brought me to update my board and make it even more productive !

The context

During the almost 18 months since I wrote this first post, a lot of things have changed in my daily work (thankfully : I’m not doing the same thing again and again !). Essentially, I got involved in more projects, some of which involve people from all around the company, and some of which don’t require any code to be written. For example, I’m now engaged in our Agile Community of Practice, where I sometimes contribute content.

Here are the consequences on my work :

  • I have more tasks to deal with, not necessarily more work, but still more tasks
  • I have more sources of tasks : tasks can come from any of the projects I am involved in
  • I have more tasks depending on other people, and that are in a WAITING state meanwhile

I had to adapt my personal Kanban to this new workload.

The changes

As I explained in the previous description of my Jira Personal Kanban, I am using a custom project and Kanban board to aggregate all my tasks from various projects, in order to see everything in a single place. Here are the changes I’ve made since ; if you haven’t read that previous version yet, it might be a good idea to start there.

Quick filters

In his post Maker’s Schedule, Manager’s Schedule, Paul Graham explains the challenge programmers face when they have a lot of non-programming work to do every day. He advises using slots for different activities during the day, in order to keep uninterrupted chunks of time for creative work. To apply this technique, I reserved ‘Unbookable except for X’ slots in my calendar every day.

I had previously been using Swim-lanes to track work from different projects. This turned out not to scale very well to more projects : it made the board messy, and I kept being distracted by all these other tasks. I ditched all the Swim-lanes (well, almost : I kept one for urgent issues only). Instead of Swim-lanes for tracking projects, I now use Quick Filters. I created filters such as With Project X and Without Project X. During the day, when I want to focus on Project X, I use quick filters to only show tasks related to it.

Quick filters screen capture

Day markers

I have a daily routine of checking what’s on my plate and deciding what I’d like to achieve during the day (picking the right time to do this is an art in itself). In order to keep track of this, I use special day marker tasks : ^^^ TODAY ^^^, ^^^ TOMORROW ^^^ and ^^^ THIS WEEK ^^^. These tasks are always in my TODO column and will never be completed. I move them around to mark what I expect to finish at different time horizons. Ex : everything above ^^^ TODAY ^^^ should be finished before the end of the day.

Again, this helps me to focus on today’s activities, and to do just enough prioritization.

Day marker tasks screen capture

One last thing here : you’ll have noticed the Epic on these special tasks. It’s a way to identify them in JQL queries.
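For example, a quick filter could rely on this Epic to hide the day markers when they get in the way — a sketch, assuming the same Epic key (POPABTODO-410) as in my quick filter scripts :

```
"Epic Link" is EMPTY OR "Epic Link" != POPABTODO-410
```

Note the is EMPTY clause : in JQL, != alone would also exclude all the tasks that have no Epic at all.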

WAITING flag

Quite often, you have tasks waiting on someone else. That’s surely not the most efficient situation, but once you leave the comfort of your focused dev team, handoffs are often the norm (at least until lean principles spread to every part of the business). The status of waiting tasks is worth checking regularly, but certainly not many times per day !

Again, leaving them on my board created useless distraction. I have now taken the habit of renaming the tasks I’m waiting for with a [WAITING] ... prefix. On top of that, I created 2 quick filters, WAITING and Without WAITING, to quickly check and then forget about waiting tasks.

Waiting tasks screen capture

Watching tasks I’m not the assignee of

On some occasions, two of us might be working on the same task, or I might want to keep an eye on a task to know if something is needed from me. As there is only a single possible assignee in Jira, I changed my global filter to also include tasks with a custom label, pbourgau-watch. Any time I want to add a task to my board, I just add this label to it.

Screen capture of a task description I'm not the assignee of

Getting the Lean reports back

In order not to have too many old tasks on my board, I used to filter out old items in the global filter. This did the job, but at the cost of invalidating the lean reports (cumulative flow and control charts). To get these back, I removed this constraint from the global filter and created yet another quick filter, Without Old, which I keep on almost all the time.

Control chart screen capture

Scripts

Global Filter

project in (POP, POPABTODO, "Development Engineering Program", COPA)
AND type != Epic
AND (Assignee = pbourgau OR
    Co-Assignees in (pbourgau) OR
    mentors in (pbourgau) OR
    labels in (pbourgau-watch))
ORDER BY Rank ASC

Quick Filters

-- With "Project X" + Day marker tasks (Epic link ...) + tasks containing "BRANDING"
project = "Project X" or "Epic Link" = POPABTODO-410 or summary ~ "BRANDING"

-- Without "Project X"
project != "Project X" and summary !~ "BRANDING"

-- Without Old
status not in (DONE,CLOSED) OR updated >= -14d

-- WAITING
summary ~ 'WAITING'

-- Without WAITING
summary !~ 'WAITING'

Things that did not change

I still use a WIP limit on the In Progress column, display the Epic on the cards, and use custom color coding for tasks :

-- Tasks with an imminent due date become red
duedate <= 1d or priority = "1-Very High"

-- Tasks with a due date are orange
duedate is not EMPTY

The result

Overall, here is what my board looks like :

Full board screen capture

I guess I’m a kind of personal productivity geek … but I believe it’s a skill of utter importance for developers, especially once they get a bit of experience and are no longer fed ready-made tasks to do.

How to Subscribe to an ActionCable Channel on a Specific Page With Custom Data ?

In my spare time, I’m writing a Planning Poker App. As a reminder, planning poker is a group estimation technique designed to eliminate influence bias. Participants keep their estimates secret until everyone unveils them at the same time (see Wikipedia for more details).

The driving idea behind my app is for team members to connect together and share a view of the current vote happening in their team. Each team has an animator, who is responsible for starting new votes. This is the aspect I’ve been working on during the last few days. I want all team members to be notified that a new vote has started by displaying a countdown on their page.

I am building the app with Rails 5 but I did not have a clear idea of what technology to use to build this feature. After some googling, I found that ActionCable provides just the kind of broadcasting I am looking for (Have a look at the ActionCable Rails guide for more details).

A Specific Page

The Rails guide is pretty clear, as usual I would say, but all the examples subscribe at every page load. As explained above, I only want participants to subscribe to their own team’s votes : until they have joined a team, it is not possible to subscribe to a particular channel.

As my app currently behaves, once identified, participants get to a specific team page. I wanted to use this page as the starting point for my subscription. After some more googling about page specific JavaScript in Rails, I found this page from Brandon Hilkert that explains how to do this cleanly. The idea is to add the controller and action names to the body tag, and to filter the js code at page load. This is what I ended up doing :

First, I adapted the app layout to keep track of the controller and action names in the HTML body :

<!-- app/views/layouts/application.html.erb -->
<!DOCTYPE html>
<html>
  ...
  <body class="<%= controller_name %> <%= action_name %>">
    ...
  </body>
</html>

Then I replaced the default channel subscription with a function :

# app/assets/javascripts/channels/team.coffee
window.App.Channels ||= {}
window.App.Channels.Team ||= {}

App.Channels.Team.subscribe = ->
  App.cable.subscriptions.create "TeamChannel",
    received: (data) ->
      # Do something with this data

As a reminder, here is what the server side channel would look like :

class TeamChannel < ApplicationCable::Channel
  def subscribed
    stream_from "team_channel"
  end
end

Finally, I called this subscribe function from some page specific JavaScript :

# app/assets/javascripts/team_members.coffee
$(document).on "turbolinks:load", ->
  return unless $(".team_members.show").length > 0

  App.Channels.Team.subscribe()

That’s it. By playing around in your browser’s js console, you should be able to test it.

Custom Data

That’s just half of the story. The code above subscribes on a specific page, but it does not specify any particular team channel to subscribe to. This means that all participants would receive notifications from all teams !

In his article about unobtrusive JavaScript in Rails, Brandon Hilkert also suggests using HTML data attributes to pass parameters to a JavaScript button event handler. There’s no button in our case, but we can still use the same technique : let’s add data attributes to the HTML body.

To subscribe to a specific team channel, the plan is to add the team name to the HTML body tag through a data attribute, then to capture and use this team name when subscribing.

Again, let’s enhance the layout :

<!-- app/views/layouts/application.html.erb -->
<!DOCTYPE html>
<html>
  ...
  <body class="<%= controller_name %> <%= action_name %>" <%= yield :extra_body_attributes %> >
    ...
  </body>
</html>

I had to adapt my views. In the team members show view (the one doing the subscription), I added an extra data attribute for the team name :

<!-- app/views/team_members/show.html.erb -->
<% provide(:extra_body_attributes, raw("data-team-name=\"#{@team.name}\"")) %>

...
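With a hypothetical team named “Avengers”, the rendered body tag would look like this :

```html
<body class="team_members show" data-team-name="Avengers">
```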

With this done, it is possible to capture the team name from the page load event and feed it to the subscribe method :

# app/assets/javascripts/team_members.coffee
$(document).on "turbolinks:load", ->
  return unless $(".team_members.show").length > 0

  App.Channels.Team.subscribe($('body').data('team-name'))

I then used the team name to subscribe to a specific channel :

# app/assets/javascripts/channels/team.coffee
window.App.Channels ||= {}
window.App.Channels.Team ||= {}

App.Channels.Team.subscribe = (teamName) ->
  App.cable.subscriptions.create {channel: "TeamChannel", team_name: teamName},
    received: (data) ->
      # Do something with this data

The last piece is to actually stream from a team-specific channel on the server side :

class TeamChannel < ApplicationCable::Channel
  def subscribed
    stream_from "team_channel_#{params[:team_name]}"
  end
end
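For completeness, here is how a vote start could be pushed to that stream — a hypothetical sketch, not the app’s actual code, assuming an animator action with a @team variable :

```ruby
# Hypothetical animator action (e.g. in a VotesController) :
# notify every subscribed member of this team that a vote has started
ActionCable.server.broadcast "team_channel_#{@team.name}", vote_started: true
```

Whatever hash is broadcast here arrives as the data argument of the client side received callback.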

As before, hack a bit in your browser’s console and you should be able to check that it’s working.

Last thoughts

This is not exhaustive ; depending on your situation, there might be other things you’ll need to do, such as unsubscribing.
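For example, unsubscribing could be done by keeping a reference to the subscription object that ActionCable returns, and calling its unsubscribe() method when leaving the page — a sketch, not code from the app :

```coffeescript
# app/assets/javascripts/channels/team.coffee (sketch)
App.Channels.Team.subscribe = (teamName) ->
  # Keep the subscription object around so we can unsubscribe later
  App.Channels.Team.subscription = App.cable.subscriptions.create {channel: "TeamChannel", team_name: teamName},
    received: (data) ->
      # Do something with this data

App.Channels.Team.unsubscribe = ->
  App.Channels.Team.subscription?.unsubscribe()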

I’d also like to give a word of feedback about ActionCable after this first look at it. Overall, it worked great both in development and production. Everything seemed to work almost out of the box … except testing : I did not manage to write robust unit tests around it. There is a pull request for that which should be merged in a Rails 5.x release sometime soon. For the moment, I’m sticking to large scale cucumber tests.

How I Finally Use Docker on Small Open Source Side Projects

A few months ago, I started Philou’s Planning Poker, an open source side project to run planning poker estimate sessions remotely. The main technology is Rails, and I’d been planning to use Docker as much as possible as a way to learn it. Indeed, I learned that Docker is no Silver Bullet !

The Docker love phase

At first, everything seemed great about Docker. I’d used it on toy projects, and it proved great for quickly setting up cheap and fast virtual machines. I even created the Rubybox project on GitHub to clone new ruby VMs in a matter of seconds. I also used Docker to host my Octopress environment for writing this blog. As a long time Linux user, my dev machines have repeatedly suffered from pollution : after some time, they get plagued with all the stuff I installed for my various dev experiments, and at some point, re-installing seems easier than cleaning up the mess. If I could use containers for all my projects, Docker would be a cure for this.

After all these successes, when I started my planning poker app, I decided to go all in on Docker : development, CI and deployment. You can read the log of how I did that in these posts. Fast forward through a bit of searching, experimenting and deploying, and all was set up : my dev env was in containers, my CI was running in containers on CircleCI, and the app was pushed to containers on DigitalOcean.

Reality strikes back

At first, everything seemed to be working fine, even if there were a few glitches that I would have to fix down the road :

  • Whenever I wanted to update my app’s dependencies, I had to run bundle update twice, and not incrementally. Surely, I would manage to fix that with a bit of time
  • Obviously, the CI was slower, because it had to build the containers before deploying them to Docker Hub, but that was the price to pay in order to know exactly what was running on the server … right ?
  • And … Guard notifications did not appear on my desktop. I was accessing my dev env through ssh, so I would have to fix that, just a few hours and it should be working

After a while, I got used to my work environment and became almost as productive as I used to be … but you know, shit happens !

  • I had to install PhantomJS on my CI, and if that comes out of the box on TravisCI, you’re all alone in your own containers. Installing this on the Debian container proved unnecessarily complex, but I figured it out
  • Then all of a sudden, my CI started to break … You can read a summary of what I did to fix it here. Long story short : I had forgotten to clean up old docker images, and after enough deployments, the server ran out of space, and that corrupted the docker cache somehow. I eventually re-installed and upgraded the deployment VM. That made me lose quite some time though.
  • Finally, as I started to play with ActionCable, I could not get the web-socket notifications through my dev host. There must be some settings and configuration to make this work, for sure, but it’s supposed to work out of the box.

Eventually, this last issue convinced me to change my setup. All these usages of Docker were definitely worth it from a learning point of view, but as my focus moved to actually building the app, it was time to take pragmatic decisions.

My use of Docker now

There were 2 main ideas driving my changes to my dev env for this open source side project :

  1. Use what most people use
  2. Use commercially supported services & tools

These should keep me from losing time instead of being productive. My setup is now almost boring ! To summarize, I now use TravisCI, Heroku, and rbenv on my physical machine. I kept Docker where it really shines : all the local servers required for development are managed by Docker Compose. Here is my docker-compose.yml :

db:
  image: postgres:9.4.5
  volumes:
    - planning-poker-postgres:/var/lib/postgresql/data
  ports:
    - "5432:5432"

redis:
  image: redis:3.2-alpine
  volumes:
    - planning-poker-redis:/data # the redis image stores its data in /data
  ports:
    - "6379:6379"

This saves me from installing PostgreSQL or Redis on my dev machine, and I can start all the services required for the app with a single docker-compose up command !
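On the Rails side, the app only needs to point at the published ports — a sketch of the development section of config/database.yml, assuming default Rails conventions (the database name is hypothetical) :

```yaml
# config/database.yml (sketch) : connect to the Postgres container
# published on localhost:5432 by docker-compose
development:
  adapter: postgresql
  host: localhost
  port: 5432
  username: postgres
  database: planning_poker_development
```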

My future uses of Docker

More generally, in the near future, here is when I’ll use Docker :

  • As I just said, to manage local servers
  • To boot quick and cheap VMs (check rubybox)
  • To handle CI and deployment of large or non-standard systems, where Docker can provide a lot of benefits in terms of price, scaling or configurability

Docker came from the deployment world, and this is where it is so great. As of today though, even if it is usable as a dev VM, it is still not up to the standard of a real dev machine. That said, all the issues I ran into could be fixed, and I’m pretty sure they will be some day.

Developer ! Are You Losing Your Rat Race ?

A rat race is an endless, self-defeating, or pointless pursuit. It conjures up the image of the futile efforts of a lab rat trying to escape while running around a maze or in a wheel.

Are we building our own self-defeating maze through our exacerbated focus on technology ? Let me explain.

The context

As Marc Andreessen famously said, “Software is eating the world”, which means there is more and more demand for software. At the same time, giant countries like China, India, Russia or Brazil are producing more and more master’s degrees every year, which also means more and more software engineers. The consequence is that there have never been as many new technologies emerging as there are today. The software landscape is huge, growing and complex.

That’s great for progress, but it’s a puzzle for hiring. In this chaotic environment, years of experience with a particular technology remains easy to measure, which is why employers (and developers) tend to use keywords to cast for a job.

The effects

As a result, developers tend to pick a few technologies to master, put them on their CV and get job offers. There’s a danger with specializing in a particular technology : eventually, it will become deprecated, and in this keyword driven world, it’s almost as if you have to start from zero again. Even if a specialization is wide enough now, as time goes on and more and more technologies are created, any area of expertise will become a tiny spot in the whole landscape. One might think this is only an issue for old guys who did not stay up to date … I strongly believe this is wrong : it happened to all past technologies, and I don’t see why today’s latest .js framework wouldn’t be legacy stuff one day.

One could think that sticking to a good employer is a fix against that. It is … for some time ! Sticking to a company actually means betting on this company. What would happen if it went out of business, or through difficult times, and you were asked to leave ? When you reach the job market after so long with a single employer, you’ll be a de-facto specialist, in proprietary stuff that no one is interested in.

Finally, you might work hard not to specialize, but it’s a lot more difficult to get a job as a generalist : only a few shops actually hire this way.

To summarize, we are forced into specialization, which is great in the short term, but risky in the long run.

1€ advice

So what can we do about this ? Obviously, we cannot change the world … The only ones we can act on are ourselves !

Learning

In our fast moving tech world, learning remains key ! But instead of trying to keep up with all the cool new techs invented every day, we should study fundamental skills, and only learn just enough specific skills to get the job done. To me, fundamental skills are the things you’ll apply whatever language and technology you are using, for example :

  • design
  • architecture (whatever that is …)
  • clean code
  • refactoring
  • legacy code
  • testing
  • tooling
  • mentoring & coaching
  • programming paradigms (functional, dynamic, static, imperative, OO, concurrent …)
  • process flow
  • communication
  • product definition
  • concurrency
  • performance

I wrote this post explaining how I learned some of these skills (by no means would I say this is the only way). Good mastery of these skills should be enough to quickly get up to speed in any project you are involved in. This other article, In 2017, learn every language, which I found through the excellent hackernewsletter, explains how this is possible.

Unfortunately, knowing is not enough …

Selling

How do you convince others that you are up to the job in a particular technology ? Unfortunately, I don’t have a definitive answer yet …

Regularly, people try to coin a word to describe the competent generalist developer : polyglot, full stack, craftsman … If it’s good enough, it usually gets taken over quite fast by the industry and just becomes yet another buzzword (the only exception being eXtreme Programming, but who would like to hire an eXtreme Programmer ?).

In Soft Skills, John Sonmez says the trick is to explain to people that you might not have experience in a technology ‘yet’. This might work, if your resume gets through, which is not guaranteed.

Here’s my try : the next time I’ll polish my resume, I’ll try to put forward my fundamental skills first, for example with 5 stars self-assessments. Only after will I add something like “By the way, I could work with tech X Y Z …”.

Independence

Being your own boss could be a solution in the long term. I recently listened to The End of Jobs, in which the author explains that entrepreneurship is an accessible alternative these days, and that, like any skill, it’s learnable. The catch is that there are no schools, no diplomas, and it seems a lot riskier in the short run. Despite that, he makes the point that the skills you’ll learn make it quite safe in the long run !

Questions

I feel like my post asks more questions than it provides answers :–). Honestly, I’d really love to read other people’s opinions and ideas. What are your tricks for marketing yourself on new technologies ? As a community, what could we do to fight our planned obsolescence ? Do you think I’m totally wrong and that the problem does not exist ? What do you think ?

How I Fixed ‘Devicemapper’ Error When Deploying My Docker App

A few months ago, I started continuously deploying my latest side project to a Digital Ocean box. If you are interested, here is the full story of how I did it. All was going pretty well until last week, when the builds unexpectedly started to fail. I wasn’t getting the same error at every build, but it was always the Docker deployment that failed. Here are the kinds of errors I got :

# At first, it could not connect to the db container
PG::ConnectionBad: could not translate host name "db" to address: Name or service not known

# Then I started to have weird EOF errors
docker stderr: failed to register layer: ApplyLayer exit status 1 stdout:  stderr: unexpected EOF

# Eventually, I got some devicemapper errors
docker stderr: failed to register layer: devicemapper: Error running deviceCreate (createSnapDevice) dm_task_run failed

You can read the full error logs here.

That’s what happens when you go cheap !

After searching the internet a bit, I found this issue, which made me understand that my server had run out of disk space because of old versions of my docker images. I tried to remove them, but the commands were failing. After some more searching, I found this other issue and came to the conclusion that there was no solution except resetting docker completely. Fortunately, Digital Ocean has a button for rebuilding the VM.

Once the VM was rebuilt, the first thing I did was try to connect from a shell on my local machine. I had to clean up my known hosts file, but that was simple enough.

nano ~/.ssh/known_hosts
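Rather than editing the file by hand, ssh-keygen can remove the stale entry directly (here with the IP of my VM) :

```shell
# Make sure the file exists (fresh machines), then drop the stale host key
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
ssh-keygen -R 104.131.47.10
```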

Once this was done, I just followed the steps I had documented in my previous blog post.

Was I all done ?

Almost … I ran into another kind of error this time : processes kept getting killed on my VM.

INFO [cc536697] Running /usr/bin/env docker-compose -f docker-compose.production.yml run app bundle exec rake db:migrate as root@104.131.47.10
rake aborted!
SSHKit::Runner::ExecuteError: Exception while executing as root@104.131.47.10: docker-compose exit status: 137
docker-compose stdout: Nothing written
docker-compose stderr: Starting root_db_1
bash: line 1: 18576 Killed

After some more Google searching, I discovered that this time, the VM was running out of memory ! The quick fix was to upgrade the VM (at an extra cost of 5$ / month).

After increasing the memory (and disk space) of the VM, deployment went like a charm. Others have fixed the same issue for free by adding a swap partition to the VM.

The end of the story

I wasted quite some time on this, but it taught me some lessons :

  1. I should have taken care of cleaning up the old images and containers, at least manually, at best automatically
  2. I should write a script to provision a new server
  3. The cheap options always come at a cost
  4. For an open source side project like this one, it might be a better strategy to only use Docker to setup my dev env, and use free services like Travis-ci and Heroku for production
  5. Doing everything myself is not a good recipe for getting things done … It is well past time I swap my developer hat for an entrepreneur’s cap
  6. In order to keep learning and experimenting, focused 20h sessions of deliberate practice might be the most time effective solution
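Lesson 1 is now easy to automate : since Docker 1.13, a single prune command removes stopped containers, dangling images and unused networks, and a cron entry can run it every night — a sketch to adapt to your own retention needs :

```shell
# /etc/cron.d/docker-prune (sketch) : prune unused Docker data every night at 3am
0 3 * * * root docker system prune -f >> /var/log/docker-prune.log 2>&1
```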