Continuously Deliver a Rails App to Your DigitalOcean Box Using Docker

I decided to use my latest side project as an occasion to learn Docker. I had first used Heroku as the deployment platform (see my previous post). It works fine, but I discovered the following shortcomings:

  • Heroku does not deploy with Docker, which means I’d get quite different configurations between dev and prod, defeating one of the main promises of Docker :(
  • The provided Dockerfile runs bundle install in a directory outside of the main Docker shared volume, which forces you to run bundle update twice (once to update Gemfile.lock, and a second time to update the actual gems …)

None of these issues could be fixed without moving away from Heroku.

A great Tutorial / Guide

I followed Chris Stump’s great tutorials to set up Docker for my app, to continuously integrate on CircleCI, and to continuously deploy to a private virtual server on DigitalOcean.

The first 2 steps (Docker & CI) worked out of the box after following the tutorials. Step 3 (CD) was a bit more complicated, because of:

  1. the specificities of DigitalOcean
  2. the fact that I’m no deployment expert …

What I needed to do to make it work

Set up SSH on the DigitalOcean box

I started by creating a one-click DigitalOcean box with Docker pre-installed. That’s when I had to set up SSH so that CircleCI could deploy to my box. DigitalOcean has a guide for this, but here is how I did it:

  1. Create a dedicated user on your dev machine: adduser digitaloceanssh
  2. Log in as this user (su digitaloceanssh) and generate SSH keys for it: ssh-keygen
  3. Print the public key (cat ~/.ssh/id_rsa.pub) and copy-paste it into your DigitalOcean box setup
  4. Print the private key (cat ~/.ssh/id_rsa) and copy-paste it into your CircleCI job’s SSH keys

The benefit of this is that you should now be able to ssh into your DigitalOcean box from your digitaloceanssh user: ssh root@<your box IP>
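
Condensed into a shell session, the whole setup looks something like this (assuming ssh-keygen’s default paths):

# on your dev machine
sudo adduser digitaloceanssh
su digitaloceanssh
ssh-keygen                 # accept the default ~/.ssh/id_rsa location
cat ~/.ssh/id_rsa.pub      # paste into the DigitalOcean box setup
cat ~/.ssh/id_rsa          # paste into the CircleCI SSH keys settings
ssh root@<your box IP>     # should now log you in without a password prompt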

Optional: update the box

The first time I logged into my box, I noticed that packages were out of date. If you need it, updating the packages is a simple matter of apt-get update && apt-get upgrade.

Fix deployment directory

By default, the home dir of the root user on the DigitalOcean box is /root/. Unfortunately, Chris Stump’s tutorial assumes it to be /home/root/. To fix that, I ssh-ed into the box and created a symbolic link: ln -s /root /home/root.

Install docker-compose on the box

Chris Stump’s tutorial expects docker-compose on the deployment box, but DigitalOcean only installs Docker on its boxes … Install instructions for docker-compose can be found here. Don’t use the container option: it does not inherit environment variables and will fail the deployment. Just use the first, curl-based alternative.
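
For reference, the curl-based install looked something like this at the time of writing (check docker-compose’s releases page for the current version number):

curl -L "https://github.com/docker/compose/releases/download/1.8.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose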

Warning: replace ALL dockerexample

This may sound obvious, but be sure to replace all the references to ‘dockerexample’ with your own app name in all of Chris Stump’s templates (I forgot some and lost a few rebuilds because of that).

Create the production DB

Chris Stump’s deployment script assumes an existing production DB, so the first migration will fail. To fix this, just do the following:

  1. ssh into the DigitalOcean server
  2. run DEPLOY_TAG=<latest_deploy_tag> RAILS_ENV=production docker-compose -f docker-compose.production.yml run app bundle exec rake db:create

You can find the latest DEPLOY_TAG in the output of the CircleCI step bundle exec rake docker:deploy.

How to access the logs

It might come in handy to check the logs of your production server! Here is how to do this:

  1. ssh into your production server
  2. run the following to tail the logs: DEPLOY_TAG=`cat deploy.tag` RAILS_ENV=production docker-compose -f docker-compose.production.yml run app tail -f log/production.log

Obviously, tail is just an example, use anything else at your convenience.

Generate a secret token

Eventually, the build and deployment job succeeded … I still had one last error when I tried to access the web site: “An unhandled lowlevel error occurred. The application logs may have details.” After some googling, I understood that this error occurs when you have not set a secret key base for your Rails app (details). There is a Rails task to generate a token; all that was needed was to create a .env file on the server, along these lines:
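
# .env (generate the token with: bundle exec rake secret)
SECRET_KEY_BASE=<paste the generated token here>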


What’s next?

Obviously, I learned quite a lot from this Docker exploration. I am still in the discovery phase, but my planning poker side project is now continuously built on CircleCI, and deployed to a DigitalOcean box.

The next steps (first, find a better subdomain; second, speed up the build job) will tell me if this kind of deployment is what I need for my pet projects. If it turns out too complicated or too difficult to maintain, Dokku is on my radar.

ReXP : Remote eXtreme Programming

My colleague Ahmad from Beirut gave a talk at Agile Tour Beirut on Saturday about how we adapted XP to a distributed team at work. I gave him a hand and played the remote guy during the talk.

With me joining through Skype, we did a first demo of remote pair programming on FizzBuzz, using IDEA and Floobits.

We then did a demo of remote retrospectives using Trello.

When should I use ReXP?

The conclusion is that:

  • If people are spread over 2 or a few cities, and there are enough of them at each location to build a team, just build separate teams at each location
  • If people are spread over a lot of places, maybe involving many time zones, then the open source, pull-request-based workflow seems best
  • Otherwise, if there are not enough people to build 2 teams, they are spread over only a few locations, and the time difference is not too big, then stretching XP to remote will work best

As it is said that “nothing beats XP for small collocated teams”, I guess “nothing beats ReXP for small almost collocated teams”.

Tools to make it better

As Ahmad said in his talk, tools already exist. We could add that more would be welcome:

  • Floobits or Saros help tremendously with remote pairing, but maybe cloud-based IDEs like Eclipse Che or Cloud9 will make all these tools useless!
  • Trello works well for remote retrospectives, but some great activities like the 5 Whys are still difficult to run with Trello. I’m sure new tools could do better.
  • I’m currently building a remote planning poker app
  • My other colleague Morgan wants to build a virtual stand up token to make it flow

Finally, here are the slides:

3 More Great Talks From JavaOne 2016

After the top 5 talks I attended at JavaOne, here are a few more!

Managing Open Source Contributions in Large Organizations

James Ward

This talk was very interesting for companies or organizations that want to use Open Source in some way without ignoring the risks.

After an introduction listing the benefits of contributing to open source, James explained the associated risks:

  • Security (evil contributions or information leaks)
  • Quality (bad contributions, increased maintenance or showing a bad image)
  • Legal (responsibility in case of patent infringing contribution, ownership of a contribution, licenses)

He then explained that there are 3 ways to deal with the issue:

  • Do nothing: devs just contribute without saying it. Pros: easy, gets it done. Cons: they need to stay under the radar, and the risks for all parties are ignored. Popularity: +++++. Example: most open source code on GitHub is shared in this manner.
  • Join a foundation: join an existing open source foundation, with its ready-made framework. Pros: everything comes out of the box (infra, governance), and it builds trust. Cons: the rules can be heavy, and ownership is given to the foundation. Popularity: +++. Example: LinkedIn put Kafka in the Apache Foundation.
  • Build tools: use your own tools to mitigate the main risks of the ‘Do nothing’ strategy. Pros: built on top of GitHub, keeps control, keeps things easy. Cons: you need to develop, test and operate the tools. Popularity: +. Example: a demo of a tool plugged into GitHub to enforce a contributor license agreement for anyone opening a pull request.

The ‘build tools’ strategy looks promising, even if it is not yet widely used!

Here are the talk and the slides on the author’s website.

Java Performance Analysis in Linux with Flame Graphs

Brendan Gregg

This is what a flame graph looks like:

Technically, it’s just an SVG with some JavaScript. It shows the performance big picture, aggregating data from Linux and JVM profilers. Vertically, you can see the call stacks in your system. The larger a block, the more time is spent inside a function (or in a sub-call). The top border is where the CPU time is actually spent. If you want to speed up your system, speed up the wider zones at the top of the graph.

The speaker is a performance engineer at Netflix; his job is to build tools that help other teams discover performance issues. This is how they use flame graphs:

  • Compare 2 flame graphs at different times to see what changed
  • Do a canary release and compare the new flame graph before finishing the deployment
  • Taking continuous flame graphs on running services helps identify JVM behavior like JIT or GC
  • They use different color themes to highlight different things
  • They also use them to identify CPU cache misses

By the way, I also thought this was a great example of using an innovative visualization to manage tons of data.

I could find neither the video nor the slides of the talk, but I managed to find a lot of other talks about flame graphs, as well as extra material, on the speaker’s homepage.

Increasing Code Quality with Gamification

Alexander Chatzizacharias

You might be wondering why we should care about gamification:

  • Worldwide, 11.2 billion hours are spent playing games every week!
  • People love to play because it makes them feel awesome
  • Games are good teachers
  • At work, we are the ones who need to make others successful
  • But only 32% of workers are engaged in their work!

Games rely on 4 main dynamics:

  • Competition (be very careful with closed economies, which can be very bad for teams)
  • Peer pressure (public stats push teams and individuals to conform to the norm)
  • Progression (regular recognition of new skills is motivating)
  • Rewards (badges, level-ups, Monkey Money, real money …)

He went on to demonstrate two games, based on Jenkins and Sonar, that aim at better code quality:

  • One mobile app developed during a 24h hackathon at CGI, which might be open sourced at some point
  • Another one called ‘Dev Cube’, created at a university, where you get to decorate your virtual cubicle

At the end of the talk, he gave the following recommendations:

  • Understand the needs of everyone, to respond to each person’s goals
  • Don’t assign things to do, that’s not fun; give rewards instead
  • Keep managers out of the picture
  • To keep it going, you need regular improvements, special events and new rules
  • KISS!

Playing at work might not be unproductive in the end!

The same talk was given at NLJug; unfortunately it’s in Dutch, and I did not find the slides anywhere either.

Top 5 Talks I Attended at JavaOne 2016 (Part 2)

This is my second post about the talks I attended at JavaOne 2016. Here is the beginning of the story. Here we go.

Euphoria Despite the Despair

Holly Cummins

Our jobs aren’t always fun … and that’s actually a problem! Studies show that people who have fun at work are 31% more productive! The talk was organized in 3 parts:

  1. What is fun?
  2. How to remove the parts that are not fun?
  3. How to add even more fun?

She defined what she called the funtinuum: fun is a function of engagement and interaction. Basically, you won’t have fun if you are doing nothing, or if no one cares about your work. That aligns well with Daniel Pink’s drivers of motivation: Autonomy, Mastery and Purpose.

If something is not fun, it’s because it does not require engagement or interaction: it’s either boring, or no one cares, or both. If that’s the case, it’s probably some kind of waste in some sense … Removing un-fun activities means removing waste. It’s interesting to note how much this sounds like lean Muda! She gave examples such as:

  • automate stuff
  • pair programming transforms criticism into collaboration (bonus: it gives an excuse to skip meetings)
  • go #NoEstimates because estimating is painful and useless
  • YAGNI defers useless things until they really add value
  • Organize to skip meetings and other boring stuff

The last step is to add fun to the workplace. She warned that adding fun before removing the un-fun stuff would feel fake and make things worse …

To add fun, she suggested using things like:

  • gamification (there was actually another great talk about gamification)
  • build a hacking contest instead of a security training
  • Install a Siren of Shame for whoever breaks the build

Here are the slides

Java 9: The Quest for Very Large Heaps

Bernard Traversat, Antoine Chambille

This talk might not be of interest to everyone, but it is for us at work. It went through the improvements coming to Java 9’s G1 garbage collector. To summarize: to scale to very large heaps, G1 splits the memory into regions. Objects can be allocated in different regions depending on their specificities, which might help build NUMA-aware applications. Having the heap split into smaller chunks also enables the GC to run in parallel, which can speed up old generation GC by up to 50 times!

Java 9 is scheduled for March 2017.

Agility and Robustness: Clojure + spec

Stuart Halloway

I hadn’t touched Clojure for a while, but I gave the language a try a few years ago. I had heard about Clojure spec but hadn’t taken the time to look at it in detail. As I understood it, spec is some sort of Design by Contract on steroids! Clojure is not statically typed, but you can now attach spec metadata to values. A spec is roughly a predicate. By defining specs for the inputs and outputs of functions, it is possible to verify at runtime that a function behaves correctly.

Like Bertrand Meyer in the classic OOSC2, who advised using contracts during development only, Stuart explained that we should think in terms of development time vs production time rather than compile time vs runtime. From this point of view, it is not of great importance whether the compiler or the continuously running test suite finds an issue.

But specs are a lot more than predicates! They can be used to:

  • enable assertions at runtime
  • generate documentation
  • generate test cases
  • generate precise call logs
  • get precise error messages
  • explore a function and see how it can be called

He went on to compare the virtues of Clojure spec with static typing (à la Java) and example-based testing:

Although I don’t believe that generative testing can ever replace example-based testing altogether, it certainly can help.

All in all, the presentation was insanely great and engaging. It made me seriously consider getting into Clojure programming again!

Here are the slides and the same talk at Strangeloop


Overall, JavaOne was great! If I had the opportunity, I’d go back every year! There were a lot of other great talks I did not write about in these 2 posts, for example:

  • Development Horror Stories was a lot of fun, especially the winning story!
  • Hacking Hiring was full of good advice
  • Managing Open Source Contributions in Large Organizations was full of good ideas
  • Increasing Code Quality with Gamification was very inspiring

Edit 17 October 2016

I summarized 3 other JavaOne talks here.

Top 5 Talks I Attended at JavaOne 2016 (Part 1)

Along with a few other colleagues, I had the chance to be sent by my company to San Francisco last week to attend the JavaOne 2016 conference.

Here is a super short list of the talks I attended that I found really interesting.

Preventing errors before they happen

Werner Dietl & Michael Ernst

Since Java 6, it is possible to pass custom annotation processors to javac. Since Java 8, it is possible to add annotations to types. The people behind the Checker Framework used this to create custom pluggable type systems for your Java programs. These type systems enforce properties on your program, emitting warnings or errors at compile time when they are violated.

Here are a few examples:

  • declare @Immutable MyObject myObject to make sure that myObject won’t be mutated
  • declare @NonNull MyObject myObject to make sure that myObject is never null

Under the hood, the compiler behaves as if @Immutable MyObject and MyObject were completely separate types, and it knows and tracks specific ways of converting between the two. The framework provides a simple API to define your own type systems. They did a live demo showing how to quickly define things like @Regex String, @Encrypted String or @Untainted String (which forbids raw user input strings, to avoid SQL injections).

The talk was really interesting; the framework seems lightweight and integrates well with the typical tool stack. I will definitely give it a try next time I have a bit of slack time.

Here are the slides and a previous session of the presentation

Keeping Your CI/CD Pipeline as Fast as It Needs to Be

Abraham Marin-Perez

Continuous Delivery and microservices are what you need to do, aren’t they? Well, when you actually try to set up a CI/CD pipeline for all your code, things get complicated pretty fast! The speaker presented how to deal with this complexity by using metrics from your VCS and build servers to draw an annotated graph of your build pipeline.

  • He used build time to set the size of every node: the longer the build, the larger the node
  • He used color for the change rate: the more often a node was built, the warmer the color

It was then possible to derive other metrics (see the sketch after the list) such as:

  • the impact time of every node: build time + build time of all the dependencies
  • the weighted impact time: impact time * change rate
  • the overall average impact time: sum of all the weighted impact times
  • the overall max impact time: max of all the impact times
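
Here is a minimal Ruby sketch of these computations on a made-up three-node pipeline (my own illustration of the idea, not the speaker’s tool or data):

# A toy pipeline: build time in minutes, change rate in builds per day,
# and the upstream nodes each node depends on.
NODES = {
  'lib'  => { time: 4, rate: 10, deps: [] },
  'api'  => { time: 6, rate: 5,  deps: ['lib'] },
  'site' => { time: 8, rate: 2,  deps: ['lib', 'api'] }
}.freeze

# Impact time of a node: its build time plus the build times of all its dependencies.
def impact_time(name)
  node = NODES[name]
  node[:deps].inject(node[:time]) { |total, dep| total + impact_time(dep) }
end

impact_times = NODES.keys.map { |name| [name, impact_time(name)] }.to_h
weighted_sum = impact_times.inject(0) { |total, (name, time)| total + time * NODES[name][:rate] }

puts impact_times.inspect                            # {"lib"=>4, "api"=>10, "site"=>22}
puts "overall average impact time: #{weighted_sum}"  # sum of the weighted impact times
puts "overall max impact time: #{impact_times.values.max}"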

Using this and your SLAs, it is possible to define policies for your build times, such as “the max build time should not be more than X”. If you want to speed up your build, you can set a target build time, and analyzing the graph should help you understand which architecture changes to make to your system in order to meet it (this sounds a lot like Toyota’s Improvement Kata …)

I loved this talk! I found the speaker captivating, and he presented novel ideas, which is not always the case.

Here are the slides, and the same presentation at Devoxx UK.

To Be Continued

I promised 5 talks, and that’s only 2! Stay tuned, I’ll write about the 3 others in the coming weeks. Here they are.

Flavors of TDD

Over the years of doing coding dojos with the same circle of people, I came up with my own style of practicing TDD. Lately, I had the chance to do a pair programming session with someone I did not know. That made me realize that there are in fact even more ways to practice TDD than I thought.

Mockist vs Classicist

A lot has already been written (and discussed) about these two approaches. I have already blogged about the subject, and I even gave a talk about it. From my own point of view, I believe that the drawbacks of making mocking the default far outweigh the benefits. I’m not saying that mocks aren’t useful from time to time, but rather that they should remain the exception.

Top-Down vs Bottom-Up

That’s the reason why I wrote this post. This is the main difference I found between my style and my pair’s. Let me explain.


Doing TDD top-down means starting with high-level end-to-end tests, implementing the minimum to make them pass, refactoring, and repeating. A bit like BDD, the point is to focus on the expected behavior and avoid writing useless things. The downside is that the refactoring part can get pretty difficult. On real-life code, strictly following top-down would mean writing a feature test first, passing it with a quick and dirty implementation, and then spending hours trying to refactor all that mess … good luck!

Here is another example, from coding dojos this time. Having had success with the top-down approach during previous dojos, we once intentionally tried to code Conway’s Game of Life top-down. We did so by writing high-level tests that checked specific patterns (gliders …). That was a nightmare! It felt like trying to reverse engineer the rules of the game from real use cases. It did not bring us anywhere.


At the other end of the spectrum, you can do bottom-up TDD. This means unit testing and implementing all the small bricks you think you’ll need to provide the expected overall feature. The idea is to avoid tunnels and to get fast feedback on what you are coding. The downside is that you might be coding something that will end up being unnecessary. Be careful: if you find yourself spending a lot of time building up utility classes, you might be doing too much bottom-up implementation.

The Numerals to Romans kata is a good exercise to try bottom-up on. Every time I did this exercise during a coding dojo, people new to it would start to come up with complicated approaches (often involving complex array manipulation). Compared to that, applying disciplined bottom-up TDD yields a brutally effective solution for Numerals to Romans, as sketched below.
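
To make the contrast concrete, here is a minimal Ruby sketch of the kind of solution disciplined bottom-up TDD converges on (my own reconstruction, not a transcript of any dojo):

# Each brick is trivially unit-testable on its own: first the ordered
# value/digit pairs, then a conversion loop built on top of them.
ROMAN_DIGITS = {
  1000 => 'M', 900 => 'CM', 500 => 'D', 400 => 'CD',
  100  => 'C', 90  => 'XC', 50  => 'L', 40  => 'XL',
  10   => 'X', 9   => 'IX', 5   => 'V', 4   => 'IV', 1 => 'I'
}.freeze

def to_roman(number)
  ROMAN_DIGITS.each_with_object('') do |(value, digits), roman|
    while number >= value
      roman << digits
      number -= value
    end
  end
end

puts to_roman(1990)   # => MCMXC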

Mixed approach

Both approaches have their pros and cons. I really believe developers who are serious about TDD should master both, and learn when to apply each. In fact, as is often the case, the best approach lies somewhere in the middle. Here’s my recipe:

  1. Start with a high level feature test
  2. try to make it pass …
  3. … (usually) fail
  4. rollback, or shelve your test and draft implementation (see the git sketch after this list)
  5. build a brick
  6. unshelve
  7. try to make it pass …
  8. … and so on, until the high-level test finally passes.
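
For steps 4 and 6, your version control’s shelf is enough; with git, a minimal sketch (a hypothetical example, any VCS shelving works) looks like this:

git stash        # shelve the failing high-level test and the draft implementation
# ... TDD the missing brick, commit it ...
git stash pop    # unshelve, then try to make the high-level test pass again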

In fact, it’s a lot like the Mikado Method, but for building features instead of refactoring.

Practice in dojos

It’s possible to practice this intentionally in coding dojos as well. Most katas should be OK, as long as the group agrees up front to solve the problem with this particular approach.

If, during the dojo, you’ve just written a test and suddenly realize that it won’t be easy to get it passing, because the elements it needs are spread throughout your code, this is the time! Comment out the test, get the green bar, refactor, uncomment the test, try to make it pass, repeat … Eventually, you’ll have all the bricks needed to make your test easy to pass.

Some might say this is not ‘pure’ TDD, but that sounds like cargo cult to me! As long as you make sure you are not building useless stuff, and you keep the feedback loop as short as possible, you’re on the right track.

How to Use Hackernews and Reddit for Blogging

A few weeks ago, I posted my latest article, Is There Any Room for the Not-Passionate Developer ?, on Hackernews and Reddit Programming. The post stayed on the first page for a while, and I got a lot of traffic. If you are blogging yourself, you might be interested to know how it happened, and what I learned in the process.

How it started

In Soft Skills, the software developer’s life manual, John Sonmez explains that posting your blog articles on HN or Reddit might bring you a ton of traffic, but that the comments can be hard to swallow at times. Within a few hours of publishing, my blog post had generated some positive activity on Twitter (favorites and retweets) from my regular followers. That’s a good sign that the post is good enough. As I had promised myself I would in such a case, I submitted the post to both HN and Reddit.

What happened?

I don’t know for sure about Reddit, but I know my post stayed on the first page of HN for a few hours; it even went up to third place for a while. In the process, I got a lot of traffic, a lot more than I am used to. I also got a ton of comments on HN, Reddit and directly on my post. John Sonmez had warned that comments on HN and Reddit can be very harsh, so I went through them quickly, took notes about the points that seemed interesting, but only responded to comments on my website.

Overall, the comments were pretty interesting, though, and brought up a lot of valid points. I’m planning to write a ‘response’ article to put all these into perspective.

Most of the traffic came on the day I submitted my post, but I had more traffic than usual for 2 or 3 days. Since then, the traffic has settled down, but I now get between 2 and 5 times more daily traffic than I typically had! An online Taiwanese tech magazine even asked my permission to translate the post into Chinese!

I’m not sure about the performance of my website during the traffic spike. I’m using Octopress to statically generate HTML on GitHub Pages, so that should be fine. I am also using a custom domain, though, and I need to make sure my DNS is correctly configured for it to perform well.

Advice for bloggers

So here is what I am going to do regarding HN and Reddit in the future:

  1. It can bring so much traffic and so many backlinks that I’ll definitely continue to submit blog posts from time to time
  2. For the moment, I’ll stick to submitting only articles that have already received good feedback; I don’t want to get bad karma or a bad reputation on these websites
  3. I might submit old articles that gathered good reviews at the time I wrote them
  4. Concerning comments, I’ll try to grow an even thicker skin. Maybe at some point I’ll try answering on HN or Reddit

Of course, depending on how this works out, I will adapt!

Kudo Boxes for Kids

How do you get your kids to participate in housekeeping? I guess that’s the dream of all parents. We’ve certainly tried quite a lot of tactics throughout the years. Carrot and stick never really worked, so we tried positive reinforcement, gratitude … Unfortunately, nothing really made any noticeable improvement.

Until now!

At work, we’ve been using kudo boxes for a while now. A kudo box is a small mailbox where teammates can drop a word of thanks or some praise (no blame allowed here!).

Why not try the same thing at home? During the summer holidays, we built kudo boxes for everyone in the family.

It’s a nice and easy way to express gratitude for any good stuff our kids do. The great thing is that it’s cheap, and it’s easy to carry cards around and hand one out at any moment.

What happened?

First, we now have very joyful kudo reading sessions: our kids rush to the boxes to check for new cards. The second most noticeable change we observed is that they are both participating more in the house chores! For example, as soon as we start cooking, they might spontaneously set the table. Or they might bring tools to help us as best as they can when we are tending the garden.

To summarize, it seems to have brought a lot of joy and love into the house.

How we started

There are many ways to build a kudo box. The simplest might be to get an old shoe box and cut a hole in the lid. We bought a wooden box with four drawers and spent some time all together decorating it. This in itself was already fun.

We started out using simple pieces of paper as kudo cards, but I later downloaded and printed a bunch of official kudo cards from the Management 3.0 website. It turns out there is a version in French.


An unexpected but great side effect is that my spouse and I started to get kudos as well! It’s really nice to receive a word from your kids. For example, here is a drawing I got from my daughter.

RSpecProxies Now Supports the .to receive(xxx)… Syntax

Pure mocks are dangerous. They let defects through, give a false sense of security and are difficult to maintain.

I’ve already talked about this before, but since then, DHH announced that he was quitting TDD, the Is TDD Dead? debate took place, and the conclusion is that mockists are dead.

There are still times when mocks feel much simpler than anything else. For example, imagine your process leaks and crashes after 10 hours, and the fix is to pass an option to a third-party library: how would you cover this with a fast test? That’s exactly the kind of situation where test proxies save you from mocks. A test proxy defers everything to the real object, but also offers unintrusive hooks and probes that you can use in your test. If you want a code example, check this commit, where I refactored a Rails controller test from mocks to RSpecProxies (v0.1).

I created RSpecProxies a while ago, and its syntax made it alien to the RSpec world; it needed an update. RSpec now supports basic proxying with partial stubs, spies, and the and_call_original and and_wrap_original methods. RSpecProxies 1.0 is a collection of hooks built on top of these to make proxying easier, with a syntax that will be familiar to RSpec users.

Before original hook

This hook is triggered before a call to a method. Suppose you want to simulate a bad connection:

it 'can simulate unreliable connection' do
  i = 0
  allow(Resource).to receive(:get).and_before_calling_original { |*args|
    i += 1
    raise if i % 3 == 0   # fail every third call
  }

  resources = Resource.get_at_least(10)

  expect(resources.size).to eq(10)
end

After original hooks

RSpecProxies provides the same kind of hook after the call:

it 'can check that the correct data is used (using and_after_calling_original)' do
  user = nil
  allow(User).to receive(:load).and_after_calling_original { |result| user = result }

  controller.login('joe', 'secret')

  expect(response).to include(user.created_at.to_s)
end

Here we are capturing the return value to use it later in the test. For this particular purpose, RSpecProxies also provides 2 other helpers:

# Store the latest result in @user of self
allow(User).to receive(:load).and_capture_result_into(self, :user)

# Collect all results in the users array
users = []
allow(User).to receive(:load).and_collect_results_into(users)

Proxy chains

RSpec mocks provides the message_chain feature to build chains of stubs. RSpecProxies provides a very similar proxy chain concept. The main difference is that it creates proxies along the way, not pure stubs. Pure stubs assume that you are mocking everything, but as our goal is to mock as little as possible, using proxies makes more sense.

With a mockist approach, message chains are a bad smell, because they make your tests very brittle by depending on a lot of implementation details. In contrast, proxy chains are meant to be used where they are the simplest way to inject what you need, without creating havoc.

For example, suppose you want to display the progress of a very slow background task. You could mock a lot of your objects to get a fast test, or, if you wanted to avoid all the bad side effects of mocking, you could run the background task in your test and have a slow test … Or, you could use a chain of proxies:

it 'can override a deep getter' do
  allow(RenderingTask).to proxy_message_chain("load.completion_ratio") { |e| e.and_return(0.2523) }

  expect(response).to include('25%')
end

Here the simplest thing to do is just to override a small getter, because from a functional point of view, that’s exactly what we want to test.

Last word

The code is on GitHub, v1.0.0 is on rubygems, it requires Ruby v2.2.5 and RSpec v3.5, the license is MIT, and help in any form is welcome!