How to Measure Your Speed With Your Business Value? (Lean Software Development Part 3)

There is a French idiom that basically goes:

There is no use running; all that is needed is to start on time…

an agile engineer would add

… and to head in the right direction

Indeed, velocity and mean cycle time, as measures of speed, have their shortcomings:

  • They can be falsified by story point inflation!
  • They do not tell the team or its product owner whether they are working on the right thing.

Wouldn’t it be great if we could track the business value we are creating instead? Wouldn’t it be more motivating for developers to know how they are contributing to the bottom line? Wouldn’t it help people align across the organization?

How to track and prioritize based on your business value

From Reinertsen’s Flow book, we learned that cost of delay is the main driver of value creation: the faster you deliver a feature, the less its cost of delay accrues, and the more value you create for your company. This article suggests that the cost of delay can be computed with the following formula:

cost of delay = f(user business value) + g(time criticality) + h(risk reduction opportunity)
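As a toy illustration, the formula could be sketched as follows; note that the weighting functions f, g and h are assumptions here, since the article does not define them:

```python
# A minimal sketch of the cost of delay formula. The weighting functions
# f, g and h are placeholders (identity by default): the article only
# states that cost of delay combines these three estimated components.

def cost_of_delay(user_business_value, time_criticality, risk_reduction,
                  f=lambda v: v, g=lambda t: t, h=lambda r: r):
    return f(user_business_value) + g(time_criticality) + h(risk_reduction)

# A story valued 8, with time criticality 3 and risk reduction 1:
print(cost_of_delay(8, 3, 1))  # 12
```

With identity weightings this is just a sum, but swapping in other functions lets a team weight, say, time criticality more heavily without changing the prioritization code.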

This other article suggests that there are different types of tasks corresponding to the different terms of the formula above.

"Different kinds of tasks"

Here is how we could link the two articles:

  • Stories with deadlines: whether through legal or market constraints, not delivering these will put you out of business (‘time criticality’ in the formula)
  • Stories that influence the bottom line: by increasing sales figures when delivered, or decreasing them when not delivered, which amounts to the same thing (‘user business value’ in the formula)
  • Risk reduction tasks: by mitigating risk or streamlining the process, these tasks indirectly improve the bottom line of other stories (‘risk reduction opportunity’ in the formula)

The latter type of task will be detailed in other articles. Let’s focus on the other two.

The case of the deadlined feature

First, I’d say its business value is 0 until it’s too late. You should not be working on it too soon, but you should not be working on it too late either!

In his book The Art of Agile Development, James Shore explains in great detail how an agile team can commit to deliverables (I really recommend this part; I might even write a post about it). He explains that in order to commit, teams should multiply their estimates by 4, or by 1.8 if they are very rigorous in their application of all the XP practices.

So a rule to handle such a task could be to:

  • estimate it
  • multiply that estimate by 4
  • subtract this time from the deadline
  • prioritize it so that it can be started on that date, but not earlier
  • not expect to create any value by completing these stories
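The steps above can be sketched in a few lines; the dates and the 5-day estimate are made-up illustration values, and the ×4 factor is Shore’s rule of thumb quoted above:

```python
from datetime import date, timedelta

# Sketch of the deadline rule: pad the estimate with Shore's x4 safety
# factor, then subtract it from the deadline to get the latest date at
# which the story should be started (but not earlier).
def latest_start_date(deadline, estimated_days, safety_factor=4):
    return deadline - timedelta(days=estimated_days * safety_factor)

# A story estimated at 5 days, due on 2014-06-01, should start on:
print(latest_start_date(date(2014, 6, 1), estimated_days=5))  # 2014-05-12
```

A very rigorous XP team could pass `safety_factor=1.8` instead and start the story later.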

What’s the business value of other typical user stories?

This article suggests that in this case, the cost of delay is equal to the business value of the feature for the user. But how can we get an idea of its actual user business value?

Before actually selling and getting the money, it’s just an estimate. With the right information, some people will make better estimates than others; nevertheless, it’s still an estimate. Let’s try a “Business Value Poker”! Here are a few ideas about how to conduct it:

  • Estimate business value at the same time as you estimate the complexity of a story
  • Create some business value $ poker estimate cards, write an app for this, or bring in some poker chips to estimate the value
  • Invite some business people (sales, marketing…) to the meeting to get real knowledge (being organized as a feature team will help)

"Hands pushing poker chips for an all-in"

In the end, when you have the estimated cost of delay and duration of every task, you can prioritize directly using WSJF (Weighted Shortest Job First):

WSJF = Cost of Delay / Duration

Just do the tasks in decreasing order of WSJF.
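For example, with an invented backlog (the task names and figures below are pure illustration), the whole scheme fits in a few lines:

```python
# Sketch of WSJF scheduling: estimate cost of delay and duration for
# each task, then work in decreasing order of their ratio.

def wsjf(task):
    return task["cost_of_delay"] / task["duration"]

backlog = [
    {"name": "import contacts", "cost_of_delay": 10, "duration": 5},  # 2.0
    {"name": "checkout fix",    "cost_of_delay": 9,  "duration": 3},  # 3.0
    {"name": "new reporting",   "cost_of_delay": 4,  "duration": 4},  # 1.0
]

for task in sorted(backlog, key=wsjf, reverse=True):
    print(task["name"], "WSJF =", wsjf(task))
# checkout fix first (3.0), then import contacts (2.0), then new reporting (1.0)
```

Notice that the cheap, urgent fix jumps ahead of the more valuable but longer story, which is exactly the point of weighting by duration.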

At the end of the sprint, just as we track the story points we completed with velocity, we could track the business value we created: that would be our business value speed. If you have access to real sales numbers, it might be interesting to see whether you can correlate the figures.

Afterthoughts

The more I learn about Lean principles, the more I find our current issue tracking systems (I’m used to Jira) limited. They seem to be databases with a nice UI, whereas what we need are tools that help us make better decisions out of the multitude of items… How come they do not provide something as simple as WSJF?

This was part 3 of my series of articles about Lean Software Development. Part 2 was What’s in Kent Beck’s eXtreme Programming Lean Diet?, and Part 4 will be “Take informed high payoff risks”.

What’s in Kent Beck’s eXtreme Programming Lean Diet? (Lean Software Development Part 2)

I’ve been programming for quite some time now, in different teams, using various methodologies. I also had the luck to do XP on at least 3 different projects. To me, the conclusion is obvious: XP delivers more. Even better, programmers working with XP seem to be happier. The only thing I’ve seen that works better than XP is fine-tuning it once the team has mastered the main principles.

"An extreme diet pill bottle"

XP was first put in place on the Chrysler C3 project, for Smalltalk performance issues. After being called in for those performance issues, Kent Beck discovered that they were only the tip of the iceberg: everything was going astray. As he was the expert in the room, people started to ask him how to organize. I remember reading some time ago that, without having thought about it before, he gathered all the most efficient programming techniques he knew into a new process. XP was born.

So the question is: what did Kent Beck put in XP that makes it work so well? Let’s go through the Flow book and its 175 lean product development principles to see if we can find some explanations.

"Concentric circles featuring the 12 core XP practices"

Going through the 12 core XP practices, the main Scrum ceremonies, and a few common additions, I’ll try to explain why they work through the Flow book’s principles.

Whole Team

This is the same thing as a pizza team, feature team, or cross-functional team. It just means putting everyone involved in the creation of the product in the same room (on-site customer, sales, product people, programmers, quality controllers, operations people…).

Ref The principle of … Summary page
B17 Proximity Proximity enables small batch sizes 129
W12 T-Shaped resources Develop people who are deep in one area and broad in many 155
W13 Skill overlap Cross-train resources at adjacent processes 156
F24 Alternate routes Develop and maintain alternate routes around points of congestion 201
F25 Flexible resources Use flexible resources to absorb variation 202
FF14 Locality of feedback Whenever possible, make the feedback local 226
FF19 Colocation Colocation improves almost all aspects of communication 230
FF23 Overlapping measurement To align behaviors, reward people for the work of others 233
D7 Alignment There is more value created with overall alignment than local excellence 252
D13 Peer-level coordination Tactical coordination should be local 257
D18 Response frequency We can’t respond faster than our (internal) response frequency 261
D22 Face-to-face communication Exploit the speed and bandwidth of face-to-face communications 263

Planning Game

The work is split into user stories. The customer estimates the business value of each story, then the programmers poker-estimate the work required for each. The game is to maximize the scheduled business value creation for the coming iteration (1 to 3 weeks).

Ref The principle of … Summary page
E4 Economic value-added The value added by an activity is the change in the economic value of the work product 32
E7 Imperfection Even imperfect answers improve decision making 36
E9 Continuous economic tradeoffs Economic choices must be made continuously 37
E10 Perishability I Many economic choices are more valuable when made quickly 38
E14 Market I Ensure decision makers feel both cost and benefit 42
E15 Optimum decision timing Every decision has its optimum economic timing 44
Q10 Queueing discipline Queue cost is affected by the sequence in which we handle the jobs in the queue 69
Q13 Queue size control I Don’t control capacity utilization, control queue size 75
Q14 Queue size control II Don’t control the cycle time, control queue size 76
V5 Variability pooling Overall variation decreases when uncorrelated random tasks are combined 95
V6 Short-term forecasting Forecasting becomes exponentially easier at short time-horizons 96
B18 Run length Short run lengths reduce queues 130
B20 Batch content Sequence first that which adds value most cheaply 131
W1 WIP constraints Constrain WIP to control cycle time and flow 145
W3 Global constraints Use global constraints for predictable and permanent bottlenecks 147
W6 Demand blocking Block all demand when WIP reaches its upper limit 151
W8 Flexible requirements Control WIP by shedding requirements 152
W19 Adaptive WIP constraints Adjust WIP constraints as capacity changes 162
F5 Periodic resynchronization Use a regular cadence to limit the accumulation of variance 177
F7 The cadence reliability Use cadence to make waiting times predictable 179
F9 Cadenced meetings Schedule frequent meetings using a predictable cadence 180
F18 The local priority Priorities are inherently local 196
FF10 Agility I We don’t need long planning horizons when we have a short turning radius 222
FF21 Hurry-up-and-wait Large queues make it hard to create urgency 232
D4 Opportunity Adjust the plan for unplanned obstacles and opportunities 249
D14 Flexible plans Use simple modular plans 258

Small Releases

Make a lot of small releases.

Ref The principle of … Summary page
Q2 Queueing waste Queues are the root cause of the majority of economic waste in product development 56
V8 Repetition Repetition reduces variation 99
B1-8 Batch size Reducing batch size reduces cycle time, variability in flow, risk and overhead, and accelerates feedback, while large batches reduce efficiency, lower motivation and urgency, and cause exponential cost and schedule growth 112-117
F8 Cadenced batch size enabling Use a regular cadence to enable small batch size 179
FF7 Queue reduction by feedback Fast feedback enables smaller queues 220
FF8 Fast-learning Use fast feedback to make learning faster and more efficient 220
FF11 Batch size feedback Small batches yield fast feedback 223
FF20 Empowerment by feedback Fast feedback gives a sense of control 231
FF21 Hurry-up-and-wait Large queues make it hard to create urgency 232
D23 Trust Trust is built through experience 264

Customer Tests

The customer assists the programmers into writing automated use case tests.

Ref The principle of … Summary page
V16 Variability displacements Move variability to the process stage where its cost is lowest 107
B17 Proximity Proximity enables small batch sizes 129
F30 Flow conditioning Reduce variability before a bottleneck 208
FF7 Queue reduction by feedback Fast feedback enables smaller queues 220
FF8 Fast-learning Use fast feedback to make learning faster and more efficient 220
FF11 Batch size feedback Small batches yield fast feedback 223
FF14 Locality of feedback Whenever possible, make the feedback local 226
FF19 Colocation Colocation improves almost all aspects of communication 230
FF20 Empowerment by feedback Fast feedback gives a sense of control 231
D8 Mission Specify the end state, its purpose and the minimum possible constraints 252
D16 Early contact Make early and meaningful contact with the problem 259

Collective Code Ownership

Every programmer is responsible to evolve and maintain all the source code, and not just his part.

Ref The principle of … Summary page
Q7 Queuing structure Serve pooled demand with reliable high-capacity servers 64
W12 T-Shaped resources Develop people who are deep in one area and broad in many 155
F25 Flexible resources Use flexible resources to absorb variation 202
FF23 Overlapping measurement To align behaviors, reward people for the work of others 233
D1 Perishability II Decentralize control for problems and opportunities that age poorly 246
D4 Virtual centralization Be able to quickly reorganize decentralized resources to create centralized power 250
D5 Inefficiency The inefficiency of decentralization (as opposed to silos) can cost less than the value of faster response time 251

Coding Standards

All programmers agree on coding conventions for all the source code they write.

Ref The principle of … Summary page
W12 T-Shaped resources Develop people who are deep in one area and broad in many 155
F25 Flexible resources Use flexible resources to absorb variation 202

Sustainable Pace

As the value created by knowledge work does not increase linearly with the time spent, it’s wiser to work a number of hours that maximizes the work done while allowing the team to keep going forever if needed.

Ref The principle of … Summary page
E5 Inactivity Watch the work product, not the worker 33
Q3 Queueing capacity utilization Capacity utilization increases queues exponentially 59
B9 Batch size death spiral Large batches lead to even larger batches 118

Metaphor

Whether through an actual metaphor or a ubiquitous language, the idea is to build a shared, customer-oriented architecture and design of the system.

Ref The principle of … Summary page
W12 T-Shaped resources Develop people who are deep in one area and broad in many 155
F25 Flexible resources Use flexible resources to absorb variation 202

Continuous Integration

All the code of the whole team is merged, tested, packaged and deployed very frequently (many times per day).

Ref The principle of … Summary page
Q2 Queueing waste Queues are the root cause of the majority of economic waste in product development 56
V8 Repetition Repetition reduces variation 99
B1-8 Batch size Reducing batch size reduces cycle time, variability in flow, risk and overhead, and accelerates feedback, while large batches reduce efficiency, lower motivation and urgency, and cause exponential cost and schedule growth 112-117
B12 Low transaction cost Reducing the transaction cost per batch lowers overall costs 123
B16 Transport batches The most important batch is the transport batch 128
B19 Infrastructure Good infrastructure enables small batches 130
F29 Resource centralization Correctly managed, centralized resources can reduce queues 206
FF7 Queue reduction by feedback Fast feedback enables smaller queues 220
FF11 Batch size feedback Small batches yield fast feedback 223
FF16 Multiple control loops Embed fast control loops inside slow loops 228
FF21 Hurry-up-and-wait Large queues make it hard to create urgency 232

Test Driven Development

Programmers write failing tests (both customer and unit tests) before the actual production code.

Ref The principle of … Summary page
V15 Iteration speed It is usually better to improve iteration speed than defect rate 106
V16 Variability displacements Move variability to the process stage where its cost is lowest 107
B1-8 Batch size Reducing batch size reduces cycle time, variability in flow, risk and overhead, and accelerates feedback, while large batches reduce efficiency, lower motivation and urgency, and cause exponential cost and schedule growth 112-117
F30 Flow conditioning Reduce variability before a bottleneck 208
FF7 Queue reduction by feedback Fast feedback enables smaller queues 220
FF8 Fast-learning Use fast feedback to make learning faster and more efficient 220
FF11 Batch size feedback Small batches yield fast feedback 223
FF14 Locality of feedback Whenever possible, make the feedback local 226
FF16 Multiple control loops Embed fast control loops inside slow loops 228
FF20 Empowerment by feedback Fast feedback gives a sense of control 231

Refactoring

Programmers improve the design of the system continuously, meaning in very frequent baby steps. This removes the need for a big design up front.

Ref The principle of … Summary page
E13 Decision rules Use decision rules to decentralize economic control 41
V9 Reuse Reuse reduces variability 100
B9 Batch size death spiral Large batches lead to even larger batches 118
B19 Infrastructure Good infrastructure enables small batches 130
F28 Preplanned flexibility For fast responses, preplan and invest in flexibility 205
D12 Agility II Develop the ability to quickly shift focus 255

Simple Design

Do the simplest thing that could possibly work. No need to write things that don’t add business value yet. (Note that simple does not mean easy)

Ref The principle of … Summary page
E19 Insurance Don’t pay more for insurance than the expected loss 49
V12 Variability consequences Reducing consequences is usually the best way to reduce the cost of variability 103
B9 Batch size death spiral Large batches lead to even larger batches 118
B15 Fluidity Loose coupling between product subsystems enables small batches 126
W12 T-Shaped resources Develop people who are deep in one area and broad in many 155
F25 Flexible resources Use flexible resources to absorb variation 202
D12 Agility II Develop the ability to quickly shift focus 255

Pair Programming

Programmers sit in pairs at the same computer to write code. One writes the code, and the other comments. The keyboard changes hands very frequently.

Ref The principle of … Summary page
B13 Batch size diseconomies Batch size reduction saves much more than you think 124
B21 Batch size I Reduce the batch size before you attack bottlenecks 133
W12 T-Shaped resources Develop people who are deep in one area and broad in many 155
W13 Skill overlap Cross-train resources at adjacent processes 156
F25 Flexible resources Use flexible resources to absorb variation 202
F30 Flow conditioning Reduce variability before a bottleneck 208
FF14 Locality of feedback Whenever possible, make the feedback local 226
FF16 Multiple control loops Embed fast control loops inside slow loops 228
FF19 Colocation Colocation improves almost all aspects of communication 230
FF20 Empowerment by feedback Fast feedback gives a sense of control 231
D13 Peer-level coordination Tactical coordination should be local 257
D22 Face-to-face communication Exploit the speed and bandwidth of face-to-face communications 263

Spikes

Programmers conduct time-boxed experiments to gain insights.

Ref The principle of … Summary page
V2 Asymmetric payoffs Payoff asymmetries enable variability to create economic value 88
V7 Small experiments Many small experiments produce less variation than one big one 98

Slack Time

Keep some buffer time at the end of the iteration where team members can either close the remaining stories or work on improvements.

Ref The principle of … Summary page
V11 Buffer Buffers trade money for variability reduction 101
B9 Batch size death spiral Large batches lead to even larger batches 118
B19 Infrastructure Good infrastructure enables small batches 130
F6 Cadence capacity margin Provide sufficient capacity margin to enable cadence 178
D12 Agility II Develop the ability to quickly shift focus 255
D15 Tactical reserves Decentralize a portion of reserves 258

Daily Stand Up Meeting

The whole team starts every working day with a quick synchronization meeting.

Ref The principle of … Summary page
B3 Batch size feedback Reducing batch size accelerates feedback 113
W12 T-Shaped resources Develop people who are deep in one area and broad in many 155
W20 Expansion control Prevent uncontrolled expansion of work 163
F5 Periodic resynchronization Use a regular cadence to limit the accumulation of variance 177
F9 Cadenced meetings Schedule frequent meetings using a predictable cadence 180

Retrospective meeting

At the end of every iteration, the team meets for a retrospective, discussing what they did in order to improve.

Ref The principle of … Summary page
B9 Batch size death spiral Large batches lead to even larger batches 118
B19 Infrastructure Good infrastructure enables small batches 130
F9 Cadenced meetings Schedule frequent meetings using a predictable cadence 180
FF8 Fast-learning Use fast feedback to make learning faster and more efficient 220
FF20 Empowerment by feedback Fast feedback gives a sense of control 231
D21 Regenerative initiative Cultivating initiative enables us to use initiative 263

Demos

At the end of every iteration, the team demonstrates what it did to the customer.

Ref The principle of … Summary page
E14 Market I Ensure decision makers feel both cost and benefit 42
B3 Batch size feedback Reducing batch size accelerates feedback 113
B9 Batch size death spiral Large batches lead to even larger batches 118
F9 Cadenced meetings Schedule frequent meetings using a predictable cadence 180
FF7 Queue reduction by feedback Fast feedback enables smaller queues 220
FF8 Fast-learning Use fast feedback to make learning faster and more efficient 220
FF20 Empowerment by feedback Fast feedback gives a sense of control 231
FF21 Hurry-up-and-wait Large queues make it hard to create urgency 232
FF23 Overlapping measurement To align behaviors, reward people for the work of others 233
D23 Trust Trust is built through experience 264

Visual Whiteboard

Display the stories of the current sprint on the wall, on a 3-column whiteboard (TODO, DOING, DONE).

Ref The principle of … Summary page
W23 Visual WIP Make WIP continuously visible 166
F27 Local transparency Make tasks and resources reciprocally visible at adjacent processes 204
D17 Decentralized information For decentralized decisions, disseminate key information widely 260

Conclusion

Whoa, that’s a lot! I did not expect to find so many principles underlying XP (I even removed principles that were not self-explanatory). For the XP practitioner that I am, writing this blog post helped me deepen my understanding of it. As XPers know, XP is quite opinionated; that’s both a strength and a weakness if you try to apply it outside of its comfort zone. This explains why some lean subjects are simply not addressed by XP.

To summarize, here is where XP shines:

  • In spite of its image as ‘a process for nerdy programmers’, XP turns out to be quite an evolved lean method!
  • XP annihilates batch size and feedback time
  • Pair programming is well explained

And here is where to look when you need to upgrade XP:

  • Better tradeoffs might be found with a real quantitative economic framework
  • Synchronization principles might help when working with other teams

Kent Beck could not have read the Flow book when he invented XP, but it seems he was just a bit ahead of the rest of us…

This was part 2 of my series of articles about Lean Software Development. Part 1 was The Flow Book Summary, and Part 3 will be How to Measure Your Speed With Your Business Value?

The Flow Book Summary (Lean Software Development Part 1)

A few weeks ago, I read The Principles of Product Development Flow by Donald G. Reinertsen.

I read it both for work and for my side projects, and I think it will be useful for both. The book is about lean product development, and is in fact a collection of 175 lean principles that one can study and understand in order to make better decisions when developing new products. The principles are divided into the following 8 categories:

  1. Economics
  2. Queues
  3. Variability
  4. Batch size
  5. WIP constraints
  6. Cadence, synchronization and flow control
  7. Fast feedback
  8. Decentralized control

I really loved the book. I have not been thrilled like that by a book since Kent Beck’s 1st edition of Extreme Programming Explained. Where Kent Beck described some values, principles and practices that work, D.G. Reinertsen has the ambition to help us quantify these practices in order to move from belief-based to fact-based decisions. For this, he gives us the keys to creating an economic framework with which we should be able to convert any change option into its economic cost.

The cover of the book "Extreme Programming Explained"

Lately, I’ve been thinking of an economic framework of my own that I could use on the projects I am currently involved in. This post is the first of a series about this:

  1. The Flow book summary
  2. What’s in Kent Beck’s eXtreme Programming Lean Diet?
  3. How to Measure Your Speed With Your Business Value?
  4. Take informed high payoff risks
  5. Measure the value of the lean startup ‘learning’
  6. Prioritizing technical improvements
  7. Summing it up for my next side project

The next part will feature an explanation of the XP practices through the lean principles. Stay tuned.

The Holy Code Antipattern

As I’ve encountered this situation in different disguises in different companies, I now assume it’s a widely applied antipattern.

Context

A team of programmers inherits a piece of code from one of their bosses. They find it really difficult to maintain: it is hard to understand, fix, and change.

The Antipattern

As this piece of code seems too complex to be maintained by a team of simple programmers, as the boss, just forbid them:

  • to refactor any part of it
  • to rewrite it from scratch
  • to use something else instead

Consequences

  • This often limits the number of bugs that appear in this library, but…
  • It slows down development, because of the micromanagement required to enforce this pattern
  • It frustrates programmers, and the best ones are likely to leave
  • It prevents better design
  • Even worse, in the long run, it prevents great domain-driven design from emerging through merciless refactoring
  • In the end, it makes the whole organization perform worse

Examples

  • Your boss wrote something a few years ago; since the domain is more or less complex, the resulting code is complicated. The subject eventually got the reputation of being ‘touchy’. Your boss is the only person who effectively manages to change anything in there. He’s a bit afraid that by trying to improve it, the whole thing might just break down and become a bug nest. So, now that he has some authority, he forbids anyone to touch it. If a change is finally required, he’ll micromanage it!

  • Your big boss spent some overtime writing an uber-meta-generic-engine to solve the universe and everything. After seeing many developers fix the same kind of bugs over and over, he decided it was time to dust off his compiler and start building something to address the root cause of them all. In the spirit of the second-system effect, he adds all the bells and whistles to his beloved project, trying to incorporate a solution to every issue he has seen during the last decade. This code grows and grows in total isolation from any real working software. When he eventually thinks it is ready, he just drops the whole thing on your team, which is now responsible for integrating and using it in the running system. He micromanages the whole thing, and you have no choice but to comply and succeed. This usually generates gazillions of bugs, makes projects really late, and ruins the developers’ lives.

Alternatives

  • Use collective code ownership so that knowledge about the code is shared by design
  • Trust programmers to design and architect the system
  • Use constant refactoring to let tailor-made domain-driven designs emerge from the system

RIP mes-courses.fr

Rest in peace, mes-courses.fr. Here is what it looked like:

I wanted to create a really fast online grocery front-end, where people could shop for the week in only 5 minutes. It supported shopping for recipes instead of individual items, and I also envisioned automatic menu recommendations and automatic item preference selection. I started 4 years ago, and this is my last act on the subject :). If you’re thinking about starting your own side project, this post is for you.

Here are the lessons I learned:

  • As a professional programmer, I largely underestimated the non-programming time required for a serious side project. It represents more than half the time you’ll spend on your project (marketing, discussing with people, mock-ups and prototypes)
  • When I started, I estimated the time it would take me to build a first prototype. Again, I ridiculously underestimated it:
    • because of the previous point
    • because on a side project, you’ll be on your own to handle any infrastructure issue
    • because you don’t have 8 hours per day to spend on your project (as a professional developer and dad of 2, I only managed to get 10 to 15 hours per week)
  • A small project does not require as much infrastructure as a big one. I lost some time doing things as I do when working on projects with more than 100K lines of code. So next time :
    • I’ll stick to end to end cucumber tests for as long as possible
    • I’ll use an economical framework like described in Donald G. Reinertsen’s Flow book to prioritize improvements vs features
  • Eventually, what killed me was that I could not go around the “experiment → adapt” loop fast enough. The project was just too big for my time
    • I’ll try to select a project that suits my constraints of time and money
    • This will be one of the first hypotheses that I’m willing to verify
    • Web marketing and HTML design are more important than coding skills for running experiments: I’m learning both
  • Scraping is a time hog. I won’t start any side project involving scraping anymore.
  • Using online services always saved me a lot of time. They are a lot more reliable than anything I could set up. Mainly, this meant:
    • Mailing services
    • Cloud deployment
  • Go the standard way. Again, any time I did something a bit weird, it turned out to cost me time
    • Use standard open source software, stick to the latest version
    • Use standard and wide spread technology
  • Automated testing and deployment saved me some time from the start. Especially with the small amount of time that I could spend on my project, it was really easy to forget details and to make mistakes.
    • Here is the Heroku deployment script I used to test and deploy in a single shell call
    • And here is a Heroku workaround to run some cron tasks weekly, which allowed me to run some scraping tests every week on Heroku
  • It took all my time ! Next time I start a side project, I’ll be prepared to
    • Spend all my free time on it (my time was divided between day-job, family, side project)
    • Spend all my learning time (books, on-line trainings …) for it
    • Choose something that I am passionate about !
    • Choose a different kind of project to fit my constraints
      • Joining an existing open source project would let me focus on technical work at my own pace
      • Volunteering for a not-for-profit project might be less time-intensive while allowing some fulfilment
  • I did my project alone, and it was hard to keep my motivation high in the long run. Next time:
    • I’ll join someone else
    • I’ll time box my project before a pivot or starting something completely different
  • I did not manage to get anything done before I settled into a regular daily rhythm. I worked from 5:30am to 7:30am; I first tried the evenings, but after a day’s work, I was too tired to be really productive.
  • When I could afford it, paying for things or services really saved me some time. I’m thinking of
    • A fast computer
    • Some paying on-line services

Sure, doing a side project seriously is a heavy time investment, but there are also a lot of benefits!

Here is what I gained:

  • Real experience with new technologies. For me, this included
    • Ruby on Rails
    • Coffeescript
    • HTML scraping
    • Dev-ops practices with Heroku
    • Web design with HTML and CSS
  • I also learned a lot of non technical skills in which I was completely inexperienced
    • Web marketing
    • Blogging
    • Mailing
  • Trying to bootstrap a for-profit side project is like running a micro company; it’s a good opportunity to understand how a company is run. This can help you become a better professional at your day job.
  • Having control over everything is a good opportunity to practice Lean techniques.
  • Failing allowed me to actually understand Lean Startup! The ideas are easy to grasp in theory; practice is a very different thing. It should help me with my next project.
  • Solving real problems on my own was a very good source of valuable blog articles.
  • I collaborated with very clever people on open source libraries
    • By fixing some bugs in some libraries I was using
    • By releasing some parts of my code as open source libraries

Next time, I hope I’ll get more euros as well !

You’ve got nothing to lose from trying! Just do it. Give yourself a year to get some small success, and then continue or repeat with something else!

Enabling Agile Practices and Elephant Taming

Everybody knows about the agile software development promise, “Regularly and continuously deliver value”. Here is how it is supposed to work:

  • Iterative
  • Focusing on what is needed now
  • Release as soon as possible
  • Planning small stories according to the team’s velocity

It all seems like common sense, and simple. Especially for people who don’t code. That’s not the whole story though; let’s have a look at a few variations.

Suppose a team uses Scrum but does not do any automated testing. As soon as the software is actually used, bugs will create havoc in the planning. Velocity will quickly fall; within a few months, the team won’t be able to add any value at all. Surely, things could be improved with some rewrite and upfront design … but this does not sound like Scrum anymore.

Now let’s suppose that another team also uses Scrum and has automated tests, but misunderstood Sprint and KISS as quick-and-dirty coding. Fortunately, this team won’t get too many bugs in production! Unfortunately, any change to the source code will trigger hundreds of test failures: again, velocity will decrease. I’ve been on such projects; in about 2 years, the team got really slow, and might eventually drop its test suite …

These two examples show that automated testing improves the situation, but also that it is not enough! Quite a few agile practices are in fact enabling practices: practices that are required for the process to deliver on the agile promise described at the beginning of this article. Most come from eXtreme Programming and have been reincarnated through Software Craftsmanship. That’s what Kent Beck meant when he said that XP practices reinforce each other. Here are a few examples:

For example, let’s take coding standards and pair programming, which really seem like a programmer’s choice. It turns out that they help achieve collective code ownership. Which in turn helps to get ‘switchable’ team members. Which helps to make good team estimates. Which is required to have a reliable velocity. Which is a must-have to regularly deliver value on commitment!

It turns out that all of the other original XP practices help to achieve the agile promise.

After a lot of time spent writing software, I now tend to think of the code as the elephant in the room. It directly or indirectly constrains every decision that is made. Recognize and tame your elephant or you’ll get carted away …

… or dragged away …

… or trampled …

Cucumber_tricks Gem: My Favorite Gherkin and Cucumber Tricks

I just compiled my Gherkin and Cucumber goodies into a gem. It’s called cucumber_tricks and the source code can be found on GitHub. It’s also tested on Travis and documented in detail on Relish.

The goal of all these tricks is to make it possible to write more natural English scenarios. Here is an extract from the gem’s readme, which explains what it can do:

Use pronouns to reference previously introduced items

foo.feature

Given the tool 'screwdriver'
When this tool is used

steps.rb

A_TOOL = NameOrPronounTransform('tool', 'hammer')

Given /^(#{A_TOOL})$/ do |tool|
  ...
end
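
Conceptually, a name-or-pronoun transform just remembers the last item introduced and resolves the pronoun to it. Here is a minimal plain-Ruby sketch of that idea (not the gem’s actual implementation; `ToolRegistry` is a hypothetical name):

```ruby
# Minimal sketch: resolve "this tool" to the last tool introduced.
# Hypothetical helper, not the cucumber_tricks implementation.
class ToolRegistry
  def initialize
    @last_tool = nil
  end

  # Called by the introducing step: remembers and returns the named tool
  def introduce(name)
    @last_tool = name
  end

  # Called by later steps: accepts either a real name or the pronoun form
  def resolve(reference)
    reference == 'this tool' ? @last_tool : reference
  end
end
```

With such a registry, a step matching `(#{A_TOOL})` can pass whatever it captured through `resolve` and always end up with a concrete tool name.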

Use the same step implementation to handle an inline arg as a 1-cell table

steps.rb

GivenEither /^the dog "(.*)"$/,
            /^the following dogs$/ do |dogs_table|
  ...
end

foo.feature

Given the dog "Rolphy"
...
Given the following dogs
  | Rex  |
  | King |
  | Volt |

Add default values to the hashes of a table

foo.feature

Given the following dogs
  | names | color |
  | Rex   | white |
  | King  | sand  |

steps.rb

Given /^the following dogs$/ do |dogs|
  hashes = dogs.hashes_with_defaults('names', 'tail' => 'wagging', 'smell' => 'not nice')

#  hashes.each do |hash|
#    expect(hash['smell']).to eq('not nice')
#  end

  ...
end
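
The default-filling behaviour boils down to merging each row hash over a hash of defaults, with the row’s own values winning. A plain-Ruby sketch of the idea (the real gem also takes the name of the key column, which this sketch skips):

```ruby
# Merge defaults under each row hash; values present in the row win.
# Simplified sketch of the idea behind hashes_with_defaults.
def with_defaults(rows, defaults)
  rows.map { |row| defaults.merge(row) }
end
```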

Define named lists from a table

foo.feature

Given the following dishes
  | Spaghetti Bolognaise | => | Spaghetti | Bolognaise sauce |       |         |
  | Burger               | => | Bread     | Meat             | Salad | Ketchup |

steps.rb

Given /^the following dishes$/ do |dishes|
  name_2_dishes = dishes.hash_2_lists

#  expect(name_2_dishes['Burger']).to eq(['Bread','Meat','Salad','Ketchup'])

  ...
end

Visit relish for more detailed documentation.

My New Gem for Creating Rspec Proxies

I already wrote a lot about test proxies (here, here and here).

I just took the time to turn my previous gist into a full-fledged Ruby gem. It’s called “rspecproxies” and it can be found on GitHub. It’s fully tested and documented, and there’s a usage section in the readme to help anyone get started.

Here are the pain points proxies try to fix:

  • Without mocks, writing the test is sometimes just awfully painful (do you really want to start a background task just to get a completion ratio?)
  • With classic stubs, you sometimes have to stub things your test is not even interested in, and you end up with unmaintainably long stub setups

Let’s have a look at a few examples of tests with proxies:

  • Verify the actual load count without interfering with any behaviour
it 'caches users' do
  users = User.capture_results_from(:load)

  controller.login('joe', 'secret')
  controller.login('joe', 'secret')

  expect(users).to have_exactly(1).items
end
  • Use proxies to stub an object that does not yet exist
it 'rounds the completion ratio' do
  RenderingTask.proxy_chain(:load, :completion_ratio) {|s| s.and_return(0.2523) }

  renderingController.show

  expect(response).to include('25%')
end

I’d really love to see more code tested with proxies; it makes the whole testing experience much more natural. As with any testing technique, the easier the tests are to write, the more thorough the testing gets.
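
To make the proxy idea more concrete, here is a tiny plain-Ruby sketch of a result-capturing proxy (not the rspecproxies implementation): it wraps a method, lets the original behaviour run untouched, and records every returned value.

```ruby
# Wrap a method on an object so calls still hit the original
# implementation, while every result is recorded in the returned array.
def capture_results_from(target, method_name)
  results = []
  original = target.method(method_name)
  target.define_singleton_method(method_name) do |*args|
    original.call(*args).tap { |result| results << result }
  end
  results
end
```

A usage sketch, with a hypothetical `UserStore` class: after `captured = capture_results_from(store, :load)`, every subsequent `store.load(...)` behaves as before, but its return value also lands in `captured`, ready for assertions.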

Better Error Messages When Testing Html Views

When testing HTML views, whether from RSpec or from Cucumber, XPath can be really helpful to quickly find expected elements.
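
For instance, with Ruby’s standard library alone, an XPath can pull a specific row out of a rendered page. A minimal sketch using REXML (not tied to any particular test framework):

```ruby
require 'rexml/document'

# A tiny fragment of rendered HTML, as a test might receive it
html = <<~HTML
  <table id='dish-panel'>
    <tr><td>Pizza</td></tr>
    <tr><td>Grilled Lobster</td></tr>
  </table>
HTML

doc = REXML::Document.new(html)
# Find the row whose cell contains the expected dish name
row = REXML::XPath.first(doc, "//table[@id='dish-panel']//tr[td[contains(.,'Grilled Lobster')]]")
```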

Unfortunately, a bit like with regular expressions, when you start using XPath to solve a problem, you often end up with two problems … Part of the reason is that XPaths tend to be cryptic. In the case of testing, the error messages coming from unmatched XPaths are even more cryptic!

That’s why I had the idea for xpath-specs: a small gem that lets you associate a description with an XPath and nest XPaths together, all to simplify tests and assertion-failure reporting.

For example, with an assertion like this:

expect(html).to contain_a(dish_with_name("Grilled Lobster"))

Here is the kind of failure message one can get:

expected the page to contain a dish that is named Grilled Lobster (//table[@id='dish-panel']//tr[td[contains(.,'Grilled Lobster')]])
       it found a dish (//table[@id='dish-panel']//tr) :
          <tr><td>Pizza</td>...</tr>
       but not a dish that is named Grilled Lobster (//table[@id='dish-panel']//tr[td[contains(.,'Grilled Lobster')]])

And here is the required setup:

# spec/support/knows_page_parts.rb

module KnowsPageParts
  def dish
    Xpath::Specs::PagePart.new("a dish", "//table[@id='dish-panel']//tr")
  end

  def dish_with_name(name)
    dish.that("is named #{name}", "[td[contains(.,'#{name}')]]")
  end
end
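
The nesting itself is simple: each refinement appends to both the human-readable description and the XPath. A hypothetical sketch of such a composition (`DescribedXpath` is an illustrative name, not the gem’s actual code):

```ruby
# Sketch of a describable, nestable xpath: `that` returns a new part whose
# description and xpath are the parent's plus the refinement's.
class DescribedXpath
  attr_reader :description, :xpath

  def initialize(description, xpath)
    @description = description
    @xpath = xpath
  end

  def that(description, xpath_suffix)
    DescribedXpath.new("#{@description} that #{description}", @xpath + xpath_suffix)
  end
end
```

Because both the description and the XPath grow together, a failing assertion can report the readable description alongside the exact XPath it tried, which is what makes the failure messages above possible.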

Have a look at the readme for more details.