Mark Gilbert's Blog

Science and technology, served light and fluffy.

Web Service Testing – Part 1 of 4

One of my projects currently involves writing web services that expose a key database for our client.  One of my goals as the lead developer was to create a test suite that could be run against the development, staging, pre-production, and production versions of these services.  I wanted this suite to exercise a large portion of the web service code, but I was constrained by the fact that I wouldn’t have access to the pre-production or production databases to write test data into them (so that my tests could expect that test data coming back from the service).  Even if I had that level of access, I would be very leery about using it, especially with the production database.  I had to find a different way to test the web services.

In this blog post and the three subsequent ones, I’ll explain the testing solution that I eventually settled on, an interesting performance issue that we stumbled upon using this test suite, and the value that the test suite has brought to the project.  I’ve used VB.NET and NUnit for all of this testing and my nomenclature in these posts reflects those implementation decisions.

Each of the two web services has a main search feature that allows the client code to specify one or more filters to include records by or exclude records by.  This allows the client code to define quite specifically what data the service should return.  Because this is the most complex of the web service methods, and because it is the one expected to get the most usage in production, the lion’s share of the tests focus on this search and its components.

As I mentioned in the introduction, I couldn’t write tests that would insert a record and then check that that record was returned by the service.  After several iterations of tests, I eventually settled on counting the number of records returned by a given “include by” search, and counting the number returned by the equivalent “exclude by” search.  Adding those two numbers together should yield the total number of records that could possibly be returned by the web service.  For example, the searches “include all records where product name contains ‘cheese'” and “exclude all records where product name contains ‘cheese'” are perfect inverses of each other, so their counts should sum to the total number of records.
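
As a concrete illustration of that invariant, here is a hypothetical sketch in Python (rather than the VB.NET used on the project), with made-up records standing in for the real database:

```python
# Hypothetical records and filter term, standing in for the real database.
records = ["cheddar cheese", "string cheese", "yogurt", "milk", "cheese curds"]

def include_by(term, data):
    # Records whose product name contains the term.
    return [r for r in data if term in r]

def exclude_by(term, data):
    # Records whose product name does NOT contain the term.
    return [r for r in data if term not in r]

# The two searches are perfect inverses, so their counts must sum to the
# total number of records -- without knowing any of the counts in advance.
assert len(include_by("cheese", records)) + len(exclude_by("cheese", records)) == len(records)
```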

This approach has two main advantages.  First, I don’t need to know ahead of time how many records the include by or exclude by searches will return, and I don’t need to know how many records are in the database (or worse, have to assume some number of records to be there).  The latter point is particularly important when you consider that the four databases these services expose (development, staging, pre-production, and production) will likely contain different data, so a generic solution that doesn’t have to be modified for each one is ideal.

Second, doing both include by and exclude by searches in each test exercises a very large percentage of the code base (one of my original design goals for the test suite).

This approach does have the disadvantage of making the include by searches and the exclude by searches dependent on one another.  It is entirely possible that a bug in the logic somewhere affects both, but that its effects cancel out when I add the record counts together.  We’ve tried to address this possibility outside of the NUnit tests, with a solution that will be described in the final post in this series.

To implement this testing solution, then:

  1. The TestFixtureSetUp method makes a web service request that asks for every record – no filters are applied at all.  The test fixture saves this number off to a Protected variable.
  2. Each test hits the web service twice: once to run the “include by X” search and a second to run the “exclude by X” search.  In both cases, the record counts are saved in Protected variables.
  3. At the end of each test, a standard pair of assertions is run.  First, either the include by count or the exclude by count has to be greater than 0.  It is assumed that there is at least one record in the database that is returnable by the service, so at least one of the two web service requests should return at least 1 record.  If neither does, something is assumed to have gone wrong (such as the web service being unreachable, timing out, etc.).  The second assertion adds the two individual record counts together and compares the sum to the total found by the TestFixtureSetUp method.  If the sum and the total don’t match, the assertion fails.
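
The original suite was written in VB.NET with NUnit, but the three steps above can be sketched in Python with unittest, where setUpClass plays the role of NUnit’s TestFixtureSetUp.  The search_service function and its data are invented stand-ins for the real web service call:

```python
import unittest

# Invented stand-in for the real web service: returns a record count for
# the given include/exclude filters.  The real service and its data are
# not shown in the post.
_FAKE_DB = ["cheddar cheese", "string cheese", "yogurt", "milk"]

def search_service(include=None, exclude=None):
    results = _FAKE_DB
    if include is not None:
        results = [r for r in results if include in r]
    if exclude is not None:
        results = [r for r in results if exclude not in r]
    return len(results)

class ProductNameSearchTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Step 1: ask for every record -- no filters applied at all.
        cls.total_record_count = search_service()

    def test_filter_by_product_name(self):
        # Step 2: hit the service twice, include by and exclude by.
        include_count = search_service(include="cheese")
        exclude_count = search_service(exclude="cheese")

        # Step 3: at least one search must return records...
        self.assertGreater(include_count + exclude_count, 0)
        # ...and the two counts must sum to the unfiltered total.
        self.assertEqual(include_count + exclude_count, self.total_record_count)
```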

After writing several of these, I was able to factor out a lot of the basic logic into a “test suite base class” that the other test fixture classes inherit from.  That is the reason that several of the variables are declared as Protected (rather than as Private, for instance).  It also means that the test suites themselves have very little code, thus making it quick and easy to write new tests.
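
One way to picture that base-class refactoring (again a Python sketch rather than the actual VB.NET code, with invented names): the shared total and the standard pair of assertions live in the base class, so each derived fixture only supplies its two searches.

```python
import unittest

# Invented stand-in for the real web service call.
_FAKE_DB = ["cheddar cheese", "string cheese", "yogurt", "milk"]

def search_service(include=None, exclude=None):
    results = _FAKE_DB
    if include is not None:
        results = [r for r in results if include in r]
    if exclude is not None:
        results = [r for r in results if exclude not in r]
    return len(results)

class SearchTestBase(unittest.TestCase):
    """Shared logic factored out of the individual test fixtures."""

    @classmethod
    def setUpClass(cls):
        # Plays the role of the Protected total saved by TestFixtureSetUp.
        cls.total_record_count = search_service()

    def assert_counts_are_inverses(self, include_count, exclude_count):
        # The standard pair of assertions every test finishes with.
        self.assertGreater(include_count + exclude_count, 0)
        self.assertEqual(include_count + exclude_count, self.total_record_count)

class ProductNameTests(SearchTestBase):
    # With the plumbing in the base class, each new fixture is tiny.
    def test_product_name_filter(self):
        self.assert_counts_are_inverses(
            search_service(include="cheese"),
            search_service(exclude="cheese"),
        )
```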

In the next post, I’ll discuss an interesting performance issue that the test suites uncovered, and how I got around it.

July 7, 2008 - Posted by | Agile

