RSpecProxies Now Supports the .to receive(…) Syntax

Pure mocks are dangerous. They let defects through, give a false sense of security, and are difficult to maintain.

I’ve already talked about this before, but since then DHH announced that he was quitting TDD, the "Is TDD Dead?" debate took place, and the conclusion is that mockists are dead.

There are still times when mocks feel much simpler than anything else. For example, imagine your process leaks and crashes after 10 hours, and the fix is to pass an option to a third-party library: how would you test this in a fast test? That’s exactly the kind of situation where using test proxies saves you from mocks. A test proxy defers everything to the real object, but also features unintrusive hooks and probes that you can use in your test. If you want a code example, check this commit, where I refactored a Rails controller test from mocks to RSpecProxies (v0.1).
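
If the idea is new to you, here is a rough sketch in plain Ruby of what a proxy does, using Module#prepend (the Connection class and GetProbe module are made up for the illustration; RSpecProxies builds this kind of hook for you on top of RSpec):

```ruby
# A hand-rolled proxy (illustration only -- not the RSpecProxies API):
# it defers every call to the real method, but records arguments so
# the test can probe them later.
class Connection
  def get(path)
    "response for #{path}"
  end
end

module GetProbe
  def calls
    @calls ||= []
  end

  def get(path)
    calls << path # unintrusive hook
    super         # defer to the real implementation
  end
end

conn = Connection.new
conn.singleton_class.prepend(GetProbe)

conn.get('/users')
conn.get('/posts')
conn.calls # => ["/users", "/posts"]
```

The real object still does all the work; the proxy only observes.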

I created RSpecProxies a while ago, and its syntax made it alien to the RSpec world; it needed an update. RSpec now supports basic proxying with partial stubs, spies, and the and_call_original and and_wrap_original methods. RSpecProxies 1.0 is a collection of hooks built on top of these to make proxying easier, with a syntax that will be familiar to RSpec users.

Before original hook

This hook is triggered before a call to the original method. Suppose you want to simulate a bad connection :

it 'can simulate unreliable connection' do
  i = 0
  allow(Resource).to receive(:get).and_before_calling_original { |*args|
    i += 1
    raise RuntimeError.new if i % 3 == 0
  }

  resources = Resource.get_at_least(10)

  expect(resources.size).to eq(10)
end

After original hooks

RSpecProxies provides the same kind of hook after the call :

it 'can check that the correct data is used (using and_after_calling_original)' do
  user = nil
  allow(User).to receive(:load).and_after_calling_original { |result| user = result }

  controller.login('joe', 'secret')

  expect(response).to include(user.created_at.to_s)
end

Here we are capturing the return value to use it later in the test. For this special purpose, RSpecProxies also provides 2 other helpers :

# Store the latest result in @user of self
allow(User).to receive(:load).and_capture_result_into(self, :user)

# Collect all results in the users array
users = []
allow(User).to receive(:load).and_collect_results_into(users)

Proxy chains

RSpec mocks provides the message_chain feature to build chains of stubs. RSpecProxies provides a very similar proxy chain concept. The main difference is that it creates proxies along the way, not pure stubs. Pure stubs assume that you are mocking everything, but as our goal is to mock as little as possible, using proxies makes more sense.

When using a mockist approach, the message chain is a bad smell because it makes your tests very brittle by depending on a lot of implementation. In contrast, proxy chains are meant to be used where they are the simplest way to inject what you need, without creating havoc.

For example, suppose you want to display the progress of a very slow background task. You could mock a lot of your objects to have a fast test, or, if you wanted to avoid all the bad side effects of mocking, you could run the background task in your test and have a slow test … Or, you could use a chain of proxies :

it 'can override a deep getter' do
  allow(RenderingTask).to proxy_message_chain("load.completion_ratio") { |e| e.and_return(0.2523) }

  controller.show

  expect(response).to include('25%')
end

Here the simplest thing to do is just to override a small getter, because from a functional point of view, that’s exactly what we want to test.

Last word

The code is on github, v1.0.0 is on rubygems, it requires Ruby v2.2.5 and RSpec v3.5, the license is MIT, and help in any form is welcome !

How to Prepare a New Ruby Env in 3 Minutes Using Docker

One or two weeks ago, I registered for the Paris Ruby Workshop Meetup and needed a Ruby env. I have been using Vagrant quite a lot to isolate my different dev envs from each other and from my main machine. As I’ve been digging more into Docker lately, I thought I’d simply use Docker and Docker Compose instead.

It turned out to be dead simple. All that is needed is a docker-compose.yml file to define the container, record the shared volume and set a bundle path inside it :

rubybox:
  image: ruby:2.3
  command: bash
  working_dir: /usr/src/app
  environment:
    BUNDLE_PATH: 'vendor/bundle'
  volumes:
    - '.:/usr/src/app'

Without the custom bundle path, bundled gems would be installed elsewhere in the container, and lost at every restart.

To use the Rubybox, just type docker-compose run rubybox and you’ll get a shell from within your ruby machine, where you can do everything you want.

In fact, I found it so useful that I created the Rubybox git repo to simplify cloning and reusing. I’ve already cloned it at least 3 times since then !

git clone git@github.com:philou/rubybox.git
cd rubybox
docker-compose run rubybox

How to Grow a Culture Book

Have you read Valve’s Handbook for new employees ?

In Management 3.0 terms, that’s a culture book. It’s a great way to build and crystallize a culture, it serves as a guide for newcomers, and it can later serve as a hiring ad for your team or company.

The good thing about a culture book is that you don’t have to write it in one go. It’s a living artifact anyway, so you’d better not ! Our current culture book has emerged from a collection of pages in our wiki.

It started as working agreements

The first real contribution to our culture book (though we did not know it at the time) was spending some time in retrospectives to define and review our working and coding conventions.

When we started doing retrospectives, we had to discuss, agree on and formalize the decisions we made about our way of working. We usually did a ‘review how we work’ activity at the beginning of the retros, spending 10 minutes to make sure we all understood and agreed on our current working conventions. If there was any disagreement or update required, we would discuss it during the retro, and at the end, add, remove or modify items from our agreement page.

It continued as self-organization workshops

After a while, we had built up a pretty extensive set of working and coding conventions. The team had already become quite productive, but to keep the momentum in the long run, we needed to increase self-organization. By reading the Management 3.0 books, and Management Workout (which has been re-edited as Managing for Happiness) in particular, I found descriptions of how to use a delegation board and delegation poker to measure and formalize the current delegation level of a team.

We did this, and started a lot of self-organization workshops :

After each of these workshops, we created a wiki page, explaining how we planned to handle the subject in the team.

The book

At that point, we had fairly extensive and formal descriptions of our working practices and conventions. By reading this set of pages, someone would get a pretty accurate grasp of our principles and values.

Wondering how we could write our own culture book, I had an “Aha !” moment and realized that all I had to do was to create a wiki page pointing to all our different agreement pages. This only took 5 minutes.

At the moment, our culture book serves 3 purposes :

  • documentation for the team members
  • guide for newcomers
  • a description of how we work, for people in the company who might want to move to our team

The next step would be to add a dash of design, a few war stories, export it as a PDF, and use it outside to advertise the team and the company.

When the Boy Scout Rule Fails

Here goes the boy scout rule :

Always check a module in cleaner than when you checked it out.

Unfortunately, this alone does not guarantee that the technical debt stays under control. What can we do then ?

Why the boy scout rule is not enough

I can easily think of a few issues that are not covered by the boy scout rule.

It only deals with local problems

In its statement, the boy scout rule is local and does not address large scale design or architecture issues. Applying the boy scout rule keeps files well written, with clear and understandable code. From a larger perspective though, it brings only little and slow improvement to the overall design.

These large scale refactorings are very difficult to deal with using the boy scout rule alone. It could be done, but it would require sharing the refactoring goal with the whole team, and then tracking its progress, while at the same time dealing with all the other subjects of the project. That’s starting to sound like multitasking to me.

It’s skill dependent

Another point about the boy scout rule (and to be fair, about any refactoring technique) is that programmers will be able to clean the code only as much as their skills allow them to !

Imagine what would happen when a master developer arrives in a team of juniors : he’d spot a lot of technical debt and would suggest improvements and ways to clean the code. Code that was thought of as very clean would suddenly be downgraded to junk !

The point here is that the boy scout rule cannot guarantee that you have no technical debt, because you don’t know how much you have !

That’s where the debt metaphor reaches its limits and flips into a productivity investment : by investing time to perform some newly discovered refactoring, you could get a productivity boost !

The cover of "Domain Driven Design"

In Domain-Driven Design: Tackling Complexity in the Heart of Software, Eric Evans calls this knowledge distillation. He means that little by little, the team gains a better understanding of the domain, sometimes going through what he calls a ‘breakthrough’. These breakthroughs often turn existing code into technical debt …

It’s context dependent

Developers are not the only ones responsible for creating technical debt. Changes to the environment create it too.

For example, if market conditions change and new expectations for the product slowly become the norm, your old perfectly working system turns into legacy code and technical debt. Let’s examine what happened to the capital markets software industry in response to the 2008 crisis.

  • The sector became a lot more regulated
  • Risk control is moving from nightly batches to real time
  • The demand for complex (and risky) contracts decreased
  • As a consequence, trading on simpler contracts exploded

All these elements combined invalidated existing architectures !

New technologies also create technical debt. Think of the switch from mainframes to the web.

What do we need then ?

Should we stop using the boy scout rule ? Surely not, that would be total nonsense. Submitting clean and readable code is a must.

But it is not enough. If we have spotted some large scale refactoring that could bring an improvement, we should do what a fund manager would do :

  1. Estimate the return on investment
  2. If it is good enough, do it now

Obviously, large refactorings should also be split into smaller value-adding, cost-reducing items. But then what ?

The cover of "The Nature of Software Development"

In The Nature of Software Development, Ron Jeffries says that we need a single value-based prioritization strategy for everything, including technical improvements. Once you’ve got that, there’s no sense in splitting and embedding your refactorings in other tasks : this would just increase your work in progress, reducing your throughput and increasing your cycle time.

Frankly, I think that’s easier said than done. I can think of two ways :

  • As Ron Jeffries tends to say, have a jelled cross-functional team discuss and prioritize collectively
  • As Don Reinertsen advocates, use an economic framework to estimate the return on investment

At least that’s a starting point !

Is There Any Room for the Not-Passionate Developer ?

In Rework, Basecamp’s David Heinemeier Hansson and Jason Fried advise us to “fire the workaholics”, while in Zero to One Peter Thiel argues that great working conditions (as described at Google for example) result from 10x technological advantages, not the other way round.

Back in 1983, Bill Gates said :

You have to think it’s a fun industry. You’ve got to go home at night and open your mail and find computer magazines or else you’re not going to be on the same wavelength as the people [at Microsoft].

Where do we stand now ? Do you need to live and breathe programming to remain a good developer ?

What about the 40h per week rule ?

Studies have repeatedly demonstrated that 40 hours per week is the most productive workload, but in Outliers: The Story of Success, Malcolm Gladwell explains that quickly reaching 10,000 hours of practice is a required road to success. As my Aïkido professor says, the more you practice, the better you get …
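
To get a feel for the numbers (my own back-of-the-envelope figures, not from the book), reaching 10,000 hours takes years even at full-time intensity :

```ruby
# Years needed to reach 10,000 hours of deliberate practice,
# assuming about 46 working weeks per year (a rough figure).
def years_to_10000_hours(hours_per_week, weeks_per_year = 46)
  (10_000.0 / (hours_per_week * weeks_per_year)).round(1)
end

years_to_10000_hours(40) # => 5.4 -- a regular full-time week
years_to_10000_hours(60) # => 3.6 -- a 60-hour week
```

The gap between those two numbers is one way to read the arguments for hard work below.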

In Soft Skills: The software developer’s life manual, John Sonmez also makes the case for hard work : while he long believed that smart work would be enough, it’s only when he put more in that he managed to drastically change his career.

In a debate, DHH argued in favor of work-life balance whereas Jason Calacanis said that working in a startup had to be an all-in activity. In the end, they agreed that what matters is passion.

From my own experience, whenever I work on something I am passionate about :

  • I am more productive
  • I feel energized rather than dulled by the work

When I look around me, all the great developers I know are passionate and put more than 40 hours per week into programming. I have also noticed that passion and effort have always been pretty good predictors of future skills.

But then, how do passionate people manage to remain productive when working more than 40 hours per week ?

What about the under the shower idea ?

In Pragmatic Thinking and Learning: Refactor Your Wetware (Pragmatic Programmers) (which is a great book BTW), Andy Hunt explains that our R-mode works in the background, and needs time away from the task at hand to come up with “out of the box” creative solutions.

XP argues for a sustainable pace, but at the same time, Uncle Bob says that we should put in 60 hours of work per week (40 for our employer, 20 for ourselves) to become and remain ‘professionals’ (I guess that’s from The Clean Coder, if I remember correctly).

On my side, 6 to 8 solid hours of pair-programming on the same subject is the most I can do before becoming a Net Negative Producing Programmer. I can do more programming per day if I also work on a side project, though !

I guess that’s how passionate people do it, they have different topics outside of their main work :

  • they read books about programming
  • they have their own side projects
  • they read articles about programming
  • they might maintain a programming blog
  • they might attend, organize or speak at meetups

Most of the time, this does not make for more work, but rather for more learning. If I’ve noticed that all the great programmers around me are passionate and strive to improve at their craft, I’ve also noticed that overworked workaholics usually aren’t very productive.

Special challenges for mums and dads

I think that Bill Gates’ 1983 statement still holds. If you are not passionate about programming, you’ll have a hard time remaining and succeeding as a programmer in the long run.

The great thing about all this passion is that we can experience an energized work environment, always bubbling with change and novelty. On the flip side, keeping up with all is not always easy.

As we developers gain more experience, we tend to lose patience with everything that feels like a pain in the ass, and we start to want :

  • Powerful languages and technologies
  • An efficient working environment
  • Smart colleagues

Unfortunately, that might also be the moment in your life when you become a parent, and you’ll want a stable income to sustain your family and some time to spend with your kids.

That is when things get tricky. You can neither jump ship for the next cool and risky startup where you’d do great things, nor find enough time moonlighting to improve your skills … To add insult to injury, even with 10 years of experience in various languages and technologies, most companies won’t look at your resume unless it contains the right keywords … It looks like the developer’s version of The Innovator’s Dilemma !

Lack of passion and parenthood might partially explain why people stop being developers after a while. I can quickly think of 2 bad consequences of this :

  • We tend to reinvent the wheel quite a lot (I’m looking at you, .js frameworks …)
  • We might be meta-ignoring (ignoring that we ignore) people skills that could make us all more efficient

Chinese translation

How to Setup Rails, Docker, PostgreSQL (and Heroku) for Local Development ?

My current side project is an online tool for remote planning poker. I followed my previous tutorial to set up Rails, Docker and Heroku.

Naturally, as a BDD proponent, I tried to install cucumber to write my first scenario.

Here is the result of my first cucumber run :

$ docker-compose run shell bundle exec cucumber
rails aborted!
PG::ConnectionBad: could not translate host name "postgres://postgres:@herokuPostgresql:5432/postgres" to address: Name or service not known
...

It turned out that I had taken instructions from a blog article on codeship that mistakenly used host: instead of url: in their config/database.yml.

After fixing that in my database.yml file, things were only working slightly better :

$ docker-compose run shell bundle exec cucumber
rails aborted!
ActiveRecord::StatementInvalid: PG::ObjectInUse: ERROR:  cannot drop the currently open database
: DROP DATABASE IF EXISTS "postgres"

The thing is, the config was still using the same database for all environments. That’s not exactly what I wanted, so I updated my config/database.yml :

default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  username: postgres
  port: 5432
  host: herokuPostgresql

development:
  <<: *default
  database: planning_poker_development

test: &test
  <<: *default
  database: planning_poker_test

production:
  <<: *default
  url: <%= ENV['DATABASE_URL'] %>

Victory ! Cucumber is running

$ docker-compose run shell bundle exec cucumber
Using the default profile...
0 scenarios
0 steps
0m0.000s
Run options: --seed 45959

# Running:



Finished in 0.002395s, 0.0000 runs/s, 0.0000 assertions/s.

0 runs, 0 assertions, 0 failures, 0 errors, 0 skips

Fixing rake db:create

Searching the web, I found that people were having similar issues with rake db:create. I tried to run it, and here is what I got :

$ docker-compose run shell bundle exec rake db:create
Database 'postgres' already exists
Database 'planning_poker_test' already exists

Why is it trying to create the postgres database ? It turns out that the DATABASE_URL environment variable takes precedence over what is defined in config/database.yml. I needed to unset this variable locally, and I already had the docker-compose.override.yml for that :

web:
  environment:
    DATABASE_URL:
  ...

shell:
  environment:
    DATABASE_URL:
  ...

Rake db:create works just fine now :

$ docker-compose run shell bundle exec rake db:create
Database 'planning_poker_development' already exists
Database 'planning_poker_test' already exists

Starting a psql session

During all my troubleshooting, I tried to connect to the PostgreSQL server to make sure that the databases were created and ready. Here is how I managed to do that :

1. Install psql client

On my Ubuntu machine, that was a simple sudo apt-get install postgresql-client-9.4.

2. Finding the server port

The port can be found through config/database.yml or through docker ps. Let’s use the latter, as we’ll need it to find the server IP as well.

$ docker ps
CONTAINER ID        IMAGE            COMMAND                  CREATED             STATUS              PORTS           NAMES
b58ce42d2b2b        postgres         "/docker-entrypoint.s"   46 hours ago        Up 46 hours         5432/tcp        planningpoker_herokuPostgresql_1

Here the port is clearly 5432.

3. Finding the server IP

Using the container id we got from the previous docker ps command, we can use docker inspect to get further details :

$ docker inspect b58ce42d2b2b | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",

4. Connecting to the database

Connecting is now just a matter of filling the command line.

$ psql -U postgres -p 5432 -d planning_poker_development -h 172.17.0.2
planning_poker_development=# select * from schema_migrations;
 version
---------
(0 rows)

5. Installing psql client directly in the shell

It should be possible to install the psql client in the shell container automatically, but I must admit I have not tried this yet. It should just be a matter of adding this to the Dockerfile :

RUN apt-get install postgresql-client-<version>

How to Boot a New Rails Project With Docker and Heroku

A few years ago, I used Heroku to deploy my side-project. It provides great service, but I remember that updates to the Heroku stack were a nightmare … Versions of the OS (and nearly everything else) changed. The migration was a matter of days, which is difficult for a side-project. At the time, I remember thinking that using branches and VMs would have been the solution.

Now that I started to use Heroku again, I decided to use Docker from the beginning. More specifically, I am expecting :

  • to have a minimal setup on my host machine
  • to use the same infrastructure in dev as in production
  • to simplify switching to a new machine
  • to simplify the migration to the next Heroku stack

As an added benefit, if ever someone else joins me in my side-project, it will be a matter of minutes before we can all work on the same infrastructure !

Heroku provides a tutorial about how to deploy an existing Rails app to Heroku using containers. Unfortunately, I did not yet have an existing Rails app … So the first challenge I faced was how to create a Rails app without actually installing Rails on my machine. The trick is to bootstrap Rails in Docker itself before packaging all this for Heroku.

1. Install the required software

I installed only 4 things on my host machine :

  • Docker (instructions)
  • Docker Compose (instructions)
  • Heroku Toolbelt (instructions)
  • the Heroku container plugin : heroku plugins:install heroku-container-tools

That’s all I changed on my host machine.

2. Setup docker

First, let’s create a new dir and step into it. Run :

mkdir docker-rails-heroku
cd docker-rails-heroku

To prepare the Heroku setup, create a Procfile

web: bundle exec puma -C config/puma.rb

and app.json

{
  "name": "Docker Rails Heroku",
  "description": "An example app.json for container-deploy",
  "image": "heroku/ruby",
  "addons": [
    "heroku-postgresql"
  ]
}

To generate docker files for Heroku, run :

heroku container:init

We want to run Rails in dev mode locally, so we need to override Heroku’s default env (check my previous post for details).

Create an .env file

RAILS_ENV=development

and docker-compose.override.yml

web:
  volumes:
    - '.:/app/user'
  environment:
    RAILS_ENV: "${RAILS_ENV}"

shell:
  environment:
    RAILS_ENV: "${RAILS_ENV}"

3. Create the Rails app

It’s now time to follow the official docker-compose rails tutorial to bootstrap the rails app and directories :

Change Dockerfile to

# FROM heroku/ruby

FROM ruby:2.2.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
ADD Gemfile /myapp/Gemfile
ADD Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
ADD . /myapp

Create a bootstrap Gemfile with the content

source 'https://rubygems.org'
gem 'rails', '4.2.0'

Bundle install within the container requires an existing Gemfile.lock :

# Create an empty Gemfile.lock
touch Gemfile.lock

It’s now time to build your docker container to be able to run rails and generate your source files. Run the following :

# Build your containers
docker-compose build

# Run rails within the shell container and generate rails files
docker-compose run shell bundle exec rails new . --force --database=postgresql --skip-bundle

Unfortunately, rails is run as root inside the container. We can change ownership and rights with these commands :

# Change ownership
sudo chown -R $USER:$USER .

# Change rights
sudo chmod -R ug+rw .

4. Make it Heroku ready

Now that the Rails files are generated, it’s time to replace the bootstrap settings with the real Heroku Dockerfile.

Revert Dockerfile to simply :

FROM heroku/ruby

Heroku uses Puma, so we need to add it to our Gemfile :

# Use Puma as the app server
gem 'puma', '~> 3.0'

We also need to add a config file for Puma. Create config/puma.rb with this content (you can check the Heroku doc for details) :

workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end

It should now be possible to rebuild the container, and run the app :

# Rebuild the containers
docker-compose build

# Start the rails app using the web container
docker-compose up web

The app should be accessible at http://0.0.0.0:8080

5. Deploying to heroku

We’re almost ready to deploy to heroku.

First, we need to exclude development files from our image. For this, we need to create a .dockerignore file with the content

.git*
db/*.sqlite3
db/*.sqlite3-journal
log/*
tmp/*
Dockerfile
.env
docker-compose.yml
docker-compose.override.yml
README.rdoc

It’s then classic Heroku deploy commands :

# create an Heroku app
heroku apps:create <your-app-name>

# And deploy to it
heroku container:release --app <your-app-name>

Your app should be accessible online at https://<your-app-name>.herokuapp.com/

Rails does not provide a default homepage in production. But you can check the logs with

heroku logs --app <your-app-name>

6. Running commands

When in development mode, you might want to run rails or other commands against your source code. The shell container exists just for that : run docker-compose run shell ....

# For example, to update your bundle
docker-compose run shell bundle update

EDIT 2016-07-20

For the moment, there’s a catch with the bundle install and update commands : as the gems are installed outside the shared volume, only Gemfile.lock is updated, which requires running docker-compose build again … I’ll look into this later and see if I can fix it.

docker-compose run shell bundle update
docker-compose build

Docker Compose Trick : How to Have an Overridable Environment Variable in Development Mode ?

I have recently been playing with Docker and Docker Compose while starting my new side project. I ran into a situation where my production container uses one value for an environment variable, but while developing, I need both a different default and the ability to override this value.

I’m using Rails and found various references about how to deploy Rails apps using Docker, but in the end, I decided to use Heroku, which handles a lot of ops for me. Rails uses the RAILS_ENV environment variable to know if it’s going to run in development, test or production mode. The heroku/ruby image sets RAILS_ENV=production, but we usually want RAILS_ENV=development locally. I could have overridden RAILS_ENV in a docker-compose.override.yml file, but that would prevent me from running my app in production mode locally.

The trick

I eventually fixed my issue with a combination of 2 files.

docker-compose.override.yml

web:
  ...
  environment:
    RAILS_ENV: "${RAILS_ENV}"
...

.env

RAILS_ENV=development

The logs

My app starts in development mode by default :

philou@philou-UX31E:~/code/planning-poker$ docker-compose up web
Starting planningpoker_herokuPostgresql_1
Recreating planningpoker_web_1
Attaching to planningpoker_web_1
web_1               | Puma starting in single mode...
web_1               | * Version 3.4.0 (ruby 2.2.3-p173), codename: Owl Bowl Brawl
web_1               | * Min threads: 5, max threads: 5
web_1               | * Environment: development
web_1               | * Listening on tcp://0.0.0.0:8080
web_1               | Use Ctrl-C to stop

But I can still override RAILS_ENV to test for example :

philou@philou-UX31E:~/code/planning-poker$ RAILS_ENV=test docker-compose up web
planningpoker_herokuPostgresql_1 is up-to-date
Recreating planningpoker_web_1
Attaching to planningpoker_web_1
web_1               | Puma starting in single mode...
web_1               | * Version 3.4.0 (ruby 2.2.3-p173), codename: Owl Bowl Brawl
web_1               | * Min threads: 5, max threads: 5
web_1               | * Environment: test
web_1               | * Listening on tcp://0.0.0.0:8080
web_1               | Use Ctrl-C to stop

5 Years of Blogging About Software

5 years ago, I started blogging. I started really casually ; my posts were personal reminders and notes rather than real, well-thought-out articles. Nevertheless, it did me great good :

  • I’ve been invited to talk at meetups
  • I’ve had the joy of seeing some articles being tweeted many times
  • I received interesting job offers from all over the world

6 months ago, after reading Soft Skills: The software developer’s life manual, I took up the practice of writing at least one article per week, and here is my (very encouraging) graph of sessions since then:

Excuses Why Not To Blog

Here is a collection of the (bad) excuses you’ll often hear for not blogging :

I don’t know how to write …

Blogging regularly is actually a pretty good way to improve your writing skills. As usual, the key is to fake it until you make it.

I’m not into this social media stuff …

You don’t need to share anything personal on your software blog. In the end, your blog is a professional tool.

I don’t have anything interesting to say …

There are others in the same situation as you who would like to see more posts about the kind of ‘uninteresting’ things you just discovered. Wouldn’t you have liked someone to have written the newbie article about « how to do XXX » that you just spent 3 days cracking ?

I don’t have the time …

Make it ! Time is never found, it is made. In the end, it’s just a matter of prioritization.

Obviously, there are other totally valid reasons why not to blog, but I’ll assume you’re able to recognize those.

Why Would You Blog ?

On the other side, if you jump into blogging, you can expect a lot of returns :

  • First thing is that you’ll obviously gain more visibility. I’ve got readers from all over the world, and my articles are sometimes re-tweeted many times.
  • You’ll improve your writing skills. Writing skills turn out to be unexpectedly important for software writers !
  • In order to lay down your ideas about something, you’ll need to dig a bit deeper into it. Teaching is said to be the last step of learning.
  • It can act as personal documentation. I used to write mine as a how-to notepad that I could refer to later on.
  • If you have a day job, you can re-post your articles there. You should gain extra visibility and expose the company to new ideas.

How to start

Once you’ve decided that you want to blog, starting should not be an issue.

Pick a platform

There are a lot of blogging platforms out there. For programmers, though, I would recommend one of these few:

| Platform | Pros | Cons |
|----------|------|------|
| Octopress | Free, open source, GitHub hosting, static HTML generation, Markdown & Git based, made for programmers | Theming can be rocky |
| Medium | Free, no setup, good looking, simple to use | It's a private company, so it could close some day! It happened to postero.us (I remember, I was there…) |
| Posthaven | Created by the founders of postero.us, sustainable, guarantees to keep your blog live forever, can post by email! | Nothing special for programmers, $5/month |
| Logdown | Looks like a hosted version of Octopress, without the hassle! | $50/year |

Then, it's up to you!

Start with how-to articles

When I started my blog, it was mostly a personal how-to reference. It allowed me to come back later and find out how I had done something last time. I figured that if it was useful to me, it must be useful to others as well!

Blog regularly

Blogging every week made a huge difference to me. My traffic went from erratic to steadily increasing. I am currently observing an 11% traffic increase per month, which means it more than triples every year: I'm not going to stop now!
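The compounding is easy to verify: an 11% monthly increase multiplies traffic by 1.11 twelve times over a year.

```ruby
# Compound an 11% monthly traffic increase over 12 months.
monthly_growth = 1.11
yearly_factor = monthly_growth ** 12
puts yearly_factor.round(2) # => 3.5
```

So a steady 11% per month is roughly a 3.5x multiplier per year.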

Integrate with the web

This boils down to social networks and analytics. Obviously, you'll want to use Google Analytics to see how people are reading your content. I'm using the venerable Feedburner to automatically post my new articles on Twitter. There's an option to use your post categories as hashtags; be sure to enable it, it brings a lot of traffic.

It’s all up to you now !

The Size of Code

The CFO's debt is visible in his balance sheet. The CTO's technical debt is invisible. What about making it visible?

Developers have an intuitive sense of the technical debt in some parts of the system, but few have an accurate estimate of its full extent. Even the size of a code base is difficult to grasp: in the end, it is just a number. Yet the facts are there: between 10,000 and 10,000,000 lines of code, the rules of the game are completely different, and still it's only invisible data on hard drives…
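Getting that one number is at least straightforward. Here is a minimal sketch of a line counter; the extension list is just an example, adjust it to your project.

```ruby
# Roughly measure a code base by counting lines in its source files.
# The extensions filter is illustrative, not a standard; tune it to taste.
def count_lines(root, extensions = %w[.rb .js .html .css])
  Dir.glob(File.join(root, '**', '*'))
     .select { |path| File.file?(path) && extensions.include?(File.extname(path)) }
     .sum { |path| File.foreach(path).count }
end
```

Run it on a 10,000-line project and a 10,000,000-line one, and you get two numbers that look deceptively similar on a screen.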

Showing It

If we had a device or a trick to show non-developers the size of the source code, people might start to feel the embarrassment of working in a bloated code base. Unfortunately, for the moment, the only ideas I've had are somewhat unrealistic, albeit funny!

First Idea : Printouts

Suppose we printed out all the source code every Monday and kept it around for everyone to feel its size. We could leave it in the middle of the open space, or in the CTO's office, so that he'd actually be hindered by the lost space. The larger the code, the bigger the trouble.

It's possible to print 50 lines on a sheet of paper, so 100 on both sides. That's 50,000 lines in a pack of 500 sheets, and eventually 200,000 in this kind of standard case:

Keeping these printouts in sync with the real code would make the thing even more painfully realistic. Imagine all the printing costs, and moving cases of paper around every day… ;)
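The arithmetic above can be sketched as code; I'm assuming a case holds 4 packs, which is what makes a case come out at 200,000 lines.

```ruby
# Convert a code base size to its printed-paper equivalent:
# 100 lines per sheet (both sides), 500 sheets per pack,
# and an assumed 4 packs per standard case.
LINES_PER_SHEET = 100
SHEETS_PER_PACK = 500
PACKS_PER_CASE  = 4

def paper_for(lines_of_code)
  sheets = (lines_of_code.to_f / LINES_PER_SHEET).ceil
  packs  = (sheets.to_f / SHEETS_PER_PACK).ceil
  cases  = (packs.to_f / PACKS_PER_CASE).ceil
  { sheets: sheets, packs: packs, cases: cases }
end

paper_for(200_000) # => {:sheets=>2000, :packs=>4, :cases=>1}
```

A million-line system already fills 5 cases; that is the kind of number you can feel.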

Second Idea : Inflatable Device

What about an inflatable device linked to SonarQube (or any other code-metrics tracking system)? It could grow as new code is written. We could make it as large as we want: 1 m³ for every 10K lines of code, making the whole office a difficult place to walk around. Try to imagine how you'd work with this thing in the office:

Third Idea : Sand

For maximum pain, let's use real sand instead of an inflatable device! Imagine the mess, with sand lying around the office. If the only way to clean up the mess were to clean up the code, surely everyone would take the issue seriously!

Final Word

Obviously, these are jokes, but I think there's a real need here. If we managed to make non-developers feel the size and cost of the code base, it would be easier to agree on priorities.