Behavior Driven Development Using Ruby (Part 3)

by Gregory Brown

We kicked off this three-part exploration of Behavior Driven Development Using Ruby by diving into RSpec basics. With the knowledge of this comprehensive framework in hand, we walked through a practical example of using BDD practices to develop a simple application. By writing the specs first, we were able to use them to drive the design and ended up with nice, clean code as a result.

Of course, the more complex our needs get, the more we'll want to take advantage of advanced techniques that can help make our lives a little easier. In this final example, I'll cover a grab bag of RSpec features as well as some essential third party tools that will help make your life easier when writing your specs.

Wherever possible, I've used code based on the source package from the second part of this series. Here, I'll mainly be focusing on the specs, so please take a look back at the second article if you're curious about implementation details.

Bringing Specs Directly from the Whiteboard to the Text Editor

When writing Test::Unit code, I've often put in tests that flunk, just to remind me to implement them later. Usually, this kind of code looks something like this:

class UserTest < Test::Unit::TestCase

  def test_user_should_have_valid_email_address
    flunk "write test verifying user's email address"
  end

end

This gives an output something like this:

  1) Failure:
write test verifying user's email address.

This works fine for its purpose, but flunk clearly wasn't designed with this sort of thing in mind. RSpec handles the issue in a clever way, automatically detecting examples without a body and treating them as not yet implemented. The same functionality can therefore be mirrored like this:

describe "user" do
  it "should have a valid email address"
end

The output for something like this is quite nice, by comparison:

Finished in 0.03517 seconds

1 example, 0 failures, 1 pending

user should have a valid email address (Not Yet Implemented)
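The mechanism behind this is simple: an example declared without a block has nothing to run. As a toy sketch (not RSpec's actual implementation; TinySpec and its methods are hypothetical names), a spec runner could detect this just by checking whether it received a block:

```ruby
# Hypothetical mini spec runner showing how bodyless examples
# can be auto-detected as pending. Not RSpec's real internals.
class TinySpec
  Result = Struct.new(:description, :status)

  def initialize(subject)
    @subject = subject
    @results = []
  end

  # An example with no block has nothing to execute, so mark it pending.
  def it(description, &block)
    status = block ? (run_example(block) ? :passed : :failed) : :pending
    @results << Result.new("#{@subject} #{description}", status)
  end

  def run_example(block)
    block.call
    true
  rescue StandardError
    false
  end

  def report
    pending = @results.select { |r| r.status == :pending }
    failures = @results.count { |r| r.status == :failed }
    lines = ["#{@results.size} examples, #{failures} failures, #{pending.size} pending"]
    pending.each { |r| lines << "#{r.description} (Not Yet Implemented)" }
    lines.join("\n")
  end
end

spec = TinySpec.new("user")
spec.it "should have a valid email address"   # no block -> pending
puts spec.report
```

The whole trick is the `&block` parameter: Ruby passes nil when no block is given, which is all the runner needs to distinguish a placeholder from a real example.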

With this in mind, we can actually flesh out a simple outline of what tests are needed for the user interface code that I snuck into the source package untested a couple weeks ago:

require File.join(File.expand_path(File.dirname(__FILE__)),"helper")
require "#{LIB_DIR}/interface" 

describe "An interface" do

  it "should prompt for players"

  it "should prompt for grid size"

  it "should be able to update board display"

  it "should display a score board"

  it "should prompt for a player's move"

end

This code, when run, yields a nice report of the work to be done:


Finished in 0.010666 seconds

5 examples, 0 failures, 5 pending

An interface should prompt for players (Not Yet Implemented)
An interface should prompt for grid size (Not Yet Implemented)
An interface should be able to update board display (Not Yet Implemented)
An interface should display a score board (Not Yet Implemented)
An interface should prompt for a player's move (Not Yet Implemented)

It goes without saying that any initial set of specifications is going to change drastically once you start fleshing it out, but it's really nice to be able to annotate your intentions like this, and doing so encourages you to start writing specs right away.

Bringing Bug Reports Directly from the Tracker to Your Specs

Very few developers like "breaking the build" by checking in failing tests. The policy varies from project to project, but typically failing tests are not committed to the main line of development, or are at least commented out upon commit.

This is risky business, because it means that things can easily be forgotten. However, RSpec offers a way to mark bits of code as pending, which allows you to hide their failure messages but still have a note about them show up in your reports.

Here's a simple demonstration of how that works:

describe "the answer" do

  before :each do
    @answer = 0
  end

  it "should be 42" do
    pending("We need to wait 7.5 million years") do
      @answer.should == 42
    end
  end

end

When this code is run, our report looks like this:


Finished in 0.037488 seconds

1 example, 0 failures, 1 pending

the answer should be 42 (We need to wait 7.5 million years)

The really interesting thing is that when someone comes along and fixes the problem, it will show up as a failure in your report. For example, changing the setup so that @answer = 42 results in this output:


'the answer should be 42' FIXED
Expected pending 'We need to wait 7.5 million years' to fail. No Error was raised.

Finished in 0.034342 seconds

1 example, 1 failure

In more practical usage, this construct is a good way to mark code that is broken but has a workaround elsewhere in your system, or code tied to a ticket that should be closed once the bug is fixed.

When the wrapped example finally passes, the FIXED failure shows up as a reminder that the pending() call can be removed, and that some follow-up action may be needed depending on what changed.

Though it's probably wise not to use this feature gratuitously, it is a much safer bet than leaving some commented-out code lying around to eventually be forgotten.
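The contract behind pending-with-block is worth spelling out: the block is still executed, the expected failure is swallowed, and an unexpected pass is converted into a failure. Here is a toy model of that behavior in plain Ruby (a sketch of the idea, not RSpec's internals; FixedPendingError is a hypothetical name):

```ruby
# Toy model of RSpec's pending-with-block semantics:
# run the block, stay pending if it still fails, and raise a
# FIXED-style error if it unexpectedly passes.
class FixedPendingError < StandardError; end

def pending(reason)
  begin
    yield
  rescue StandardError
    # Expected case: the block still fails, so the example stays pending.
    return "(#{reason})"
  end
  # The block passed, so the fix must be in -- flag the stale pending call.
  raise FixedPendingError,
        "Expected pending '#{reason}' to fail. No Error was raised."
end

# Still broken: the block raises, so the example is simply pending.
answer = 0
puts pending("We need to wait 7.5 million years") {
  raise "wrong answer" unless answer == 42
}

# Fixed: the block now passes, so pending() raises a FIXED failure.
answer = 42
begin
  pending("We need to wait 7.5 million years") {
    raise "wrong answer" unless answer == 42
  }
rescue FixedPendingError => e
  puts e.message
end
```

Running the block rather than skipping it is the whole point: it is what lets the framework notice the moment a "known broken" example quietly starts working.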
