Effective Testing with RSpec 3, Getting Real: Integration Specs

Notes from Effective Testing with RSpec 3, chapter 6.

Integration Specs

As we learned in previous chapters, Integration Specs cover code that depends on an API or on code that cannot change. I’ve been noting how Effective Testing categorizes specs differently than I do, and here we see the model layer treated as an Integration Spec because it has a SQL dependency: we test the Ledger class’s ability to interact with the database.
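
As a rough sketch (not the book’s exact listing), a Ledger integration spec along these lines exercises the real database. The DB constant is assumed to be a Sequel connection, and the :db tag is explained in the Metatags section below:

  RSpec.describe ExpenseTracker::Ledger, :db do
    let(:ledger)  { ExpenseTracker::Ledger.new }
    let(:expense) { { 'payee' => 'Starbucks', 'amount' => 5.75, 'date' => '2017-06-10' } }
    it 'saves the expense so it can be read back' do
      result = ledger.record(expense)
      # These expectations hit the real expenses table rather than a double.
      expect(result).to be_success
      expect(DB[:expenses].all).to match [a_hash_including(payee: 'Starbucks')]
    end
  end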

I’m starting to see the lines between test categories. In a Rails context, our models are unit tested because we’re testing the class itself: are the validations working, can it handle currency in different formats, does the behavior change depending on state, and so on. We do not test the ability to store data because that’s the responsibility of the Rails framework. Testing behavior that a language or framework provides is usually unnecessary and will bloat the test suite.

Effective Testing says Ledger’s dependency on SQL is covered by an Integration spec, while Ledger’s behavior is tested in an Acceptance spec.

Martin Fowler has recently written about Integration Specs; his article provides an in-depth explanation of the different ways people think of them.

Bisect

Running specs in random order unveils problems that we haven’t thought of. RSpec provides a way to repeat a random order by passing back the seed it reports, e.g. rspec --seed 32043. RSpec provides us another handy trick, bisect. We can set the seed flag, pass --bisect, and RSpec will isolate the minimal set of examples, and the order, needed to reproduce the issue.

A cost of bloated test suites is the time and attention it takes to read through specs and understand an issue when it pops up. Using bisect we can cut out the fluff and get straight to the essentials of the problem. This helps reduce the cognitive load of sifting through specs.

Metatags Revisited

We’ve been using metatags to filter which examples are run. Chapter 6 demonstrates another trick: we can use metatags to run configuration steps for specific examples.

In our spec_helper we can define the following config:

  RSpec.configure do |config|
    config.when_first_matching_example_defined(:db) do
      require_relative 'support/db'
    end
  end
  

Then any spec can include :db to trigger the require and run the code in 'support/db':

  RSpec.describe 'Expense Tracker API', :db do
  

This is very handy. In the past I have needed to set up a mock web request for 30% of the test suite. I ended up putting it in a before hook in the config because it was faster than updating 30% of the examples. Using metatags is a cleaner solution: you only run the code that you need, and each example declares its dependencies instead of hiding them.
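
For comparison, here is a minimal sketch of scoping that kind of hook with metadata instead of running it globally. The :mock_web tag is hypothetical, and the stub assumes the webmock gem:

  RSpec.configure do |config|
    # Only examples tagged :mock_web pay the setup cost.
    config.before(:example, :mock_web) do
      stub_request(:get, /example\.com/)   # assumes the webmock gem is required
    end
  end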

Effective Testing with RSpec 3, Testing in Isolation: Unit Specs

Notes from Effective Testing with RSpec 3, chapter 5.

Martin Fowler’s article on Unit Tests describes them with these properties:

small scope, done by the programmer herself, and fast — mean that they can be run very frequently when programming.

Effective Testing & RSpec also take this philosophy. What differentiates Unit Tests from others is the degree of isolation. If the dependencies allow, we focus on just the class. Otherwise, we may need to focus on a set of classes for a specific behavior.

This is where Effective Testing has taken a left turn. With the majority of my projects being in a Rails context, I think of Unit tests as the Model layer in an MVC framework. In this chapter, we practice Unit Tests at the request/controller layer.

Typically, you test a class through its public interface, and this one is no exception. The HTTP interface is the public interface.

Excerpt From: Myron Marston, Ian Dees. “Effective Testing with RSpec 3.”
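
To make that concrete, here is a hedged sketch of a unit spec at the HTTP layer: the API is driven through Rack::Test while its Ledger collaborator is replaced with a double. The names follow the book’s expense tracker, but the details are approximate:

  require 'rack/test'
  require 'json'
  RSpec.describe ExpenseTracker::API do
    include Rack::Test::Methods
    let(:ledger) { instance_double('ExpenseTracker::Ledger') }
    def app
      ExpenseTracker::API.new(ledger: ledger)   # assumes the API accepts an injected ledger
    end
    it 'returns the expense id the ledger reports' do
      allow(ledger).to receive(:record)
        .and_return(double(success?: true, expense_id: 417))
      post '/expenses', JSON.generate('some' => 'data')
      expect(JSON.parse(last_response.body)).to include('expense_id' => 417)
    end
  end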

I’m pretty stuck in my thinking of what a Unit Test is, and this is helping me redefine it. They link to Xavier Shay, who dares to toss the templated test directory and replace it with a different structure.

Unit Tests aren’t specifically for models but for any object. Integration isn’t just for controllers but for code operating against code that cannot change, usually helpers and presenters. Xavier tosses Functional in favor of Acceptance.

While different, this still feels natural. Which category of test you want to run depends on where you’re working in the codebase. This categorization of tests is another mental layer of filtering, which is important: it’s how we automate a necessary level of tests while still being efficient with test-driven development.

We are advised to watch Gary Bernhardt’s talk on boundaries. I wanted to include it here because Gary provides a great example of how a difficult test demonstrates a refactoring opportunity in the codebase.

Effective Testing with RSpec 3, Starting On the Outside: Acceptance Specs

Notes from Effective Testing with RSpec 3, chapter 4.

In this chapter we start by building our application guided by tests. This seemed extreme to me. Normally, I would set up the required dependencies first: install libraries, set up the database, and initialize the application.

Effective Testing does everything through the test file. Our first file is a test to ensure we can post an expense. It doesn’t even contain any expectations; it just assumes we have an API and can perform a POST (we do not). We then start running tests and build out the dependencies based on the failures.
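
Roughly, that first file looks something like the sketch below (not the book’s exact listing): it includes the Rack::Test helpers and posts an expense, with no expectations, against an ExpenseTracker::API class that doesn’t exist yet:

  require 'rack/test'
  require 'json'
  RSpec.describe 'Expense Tracker API' do
    include Rack::Test::Methods
    def app
      ExpenseTracker::API.new   # fails first: this class hasn't been written yet
    end
    it 'records submitted expenses' do
      coffee = { 'payee' => 'Starbucks', 'amount' => 5.75, 'date' => '2017-06-10' }
      post '/expenses', JSON.generate(coffee)   # no expectations yet
    end
  end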

Here’s a TDD example of vacuuming the rug.

  1. start vacuum
  2. FAIL – there is no vacuum
  3. buy vacuum
  4. FAIL – cannot find vacuum
  5. put vacuum in the closet

This goes on and on until we have an API mocked and ready to go. Throughout the chapter the focus is always on the tests. In ‘Filling In the Response Body’ we go as far as putting a hard-coded response in our API. No matter what we post, we will always receive { "expense_id": 42 }. We continue to build out a test suite on top of this hard-coded response.
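
The hard-coded response looks roughly like this (a sketch assuming the Sinatra-based API class the book builds, not its exact listing):

  require 'sinatra/base'
  require 'json'
  module ExpenseTracker
    class API < Sinatra::Base
      post '/expenses' do
        # Hard-coded on purpose: every POST gets the same id for now.
        JSON.generate('expense_id' => 42)
      end
    end
  end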

Top Down Testing

There are a few names for this philosophy of testing: Top-Down, Outside-In, Discovery, or London style. The idea is that we start with the purpose of the application and work down to the details. This is accomplished by mimicking the response we’re looking for, as we’ve done with { "expense_id": 42 }.

Justin Searls, a vocal proponent of Top-Down, has written about the benefits and provides a YouTube series of it in action.

The appeal of this style of testing is that it naturally flows with our train of thought. We know the purpose, what components we need to serve that purpose, and how those components act. The perceived bottleneck with TDD is that we stay in the details too long and forget how those details talk to each other.

Top-Down has its own criticisms. It’s mock-heavy, which can lead to false positives and a test suite full of objects that don’t exist, and some consider mocking a code smell.

I’m expecting that Effective Testing will address these criticisms and provide solutions to avoid them.

Notes & Observations

Effective Testing instructs us to use bundle exec rspec. I prefer binstubs for the tab-complete goodness. You can generate one with bundle binstubs rspec-core from your project’s root directory, then run RSpec with bin/rspec.