Mark Gilbert's Blog

Science and technology, served light and fluffy.

Web Service Testing – Part 3 of 4

In part 1 of this series, I described how I structured the test suite for a web service that I was building.  In part 2, I described an interesting performance issue that cropped up, and how I worked around it.  In this post, I’ll talk about the value that the test suite brought to the project.

When I started writing the test suite, I intentionally designed it so that it could be run against the development, staging, pre-production, and production web services.  I wanted some level of confidence that everything that should have been deployed was actually getting deployed, and deployed correctly.  The test fixtures were all driven off of a single setting in the app.config that contained the URL of the web service to run against (this was actually a built-in option for web references in Visual Studio: making the URL “dynamic” instead of static).

All of this meant that regression testing was ridiculously easy.  Push code to a new server, or push an update to an existing one, change the value in the app.config, and run the test suite.  I actually maintained all five URLs in the app.config (the four above, plus my local development copy), with all but the active one commented out; when it came time to change which copy of the service the test suite ran against, I just copied the correct URL out of the comments and pasted it over the “real” line in the config file.  Over the course of the project, I probably ran the full test suite 50 times.  Because it was automated, it ran the same way every time, and it ran unattended.
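For reference, the “dynamic URL” setting ends up in the app.config looking roughly like this (the project, class, and URL names here are made up for illustration; only the shape matters):

    <configuration>
      <applicationSettings>
        <SearchServiceTests.Properties.Settings>
          <!-- Active copy of the service; the commented lines are the other environments. -->
          <!-- <value>http://dev.example.com/SearchService.asmx</value> -->
          <!-- <value>http://staging.example.com/SearchService.asmx</value> -->
          <!-- <value>http://preprod.example.com/SearchService.asmx</value> -->
          <!-- <value>http://www.example.com/SearchService.asmx</value> -->
          <setting name="SearchServiceTests_SearchService_Service" serializeAs="String">
            <value>http://localhost/SearchService.asmx</value>
          </setting>
        </SearchServiceTests.Properties.Settings>
      </applicationSettings>
    </configuration>

Switching environments was just a matter of pasting one of the commented URLs into the active value element.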

Whatever time I spent building the test suite was paid back many times over.  When it came time to move the code to pre-production for the first time, the suite proved itself once again and in a very interesting way.

Development and staging were environments that my company managed, while pre-production and production were managed by the client (we had no direct access to the web or database servers there).  To head off deployment problems, we had tried to mirror our development and staging environments as closely as possible to the client’s, including operating system, database, and application patch levels.

If you recall from the first post in this series, each test in the suite ran two searches against the web service: an “include by X” and an “exclude by X”.  While the first may legitimately return no records, the second should always return at least one (there wasn’t a single filter option that would exclude every possible item).  When I ran the test suite against the pre-production web service for the first time, I found that a specific set of tests was failing.  In particular, the “exclude” tests were returning 0 records.  That should never happen, not with how the tests and the web service were constructed.  At the very least, it wasn’t a problem that had ever occurred on our servers.
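To make the “include by X” / “exclude by X” idea concrete, a pared-down version of one of those test pairs would look something like this (NUnit-style; the service proxy, method, and field names are placeholders, not the actual ones from the project):

    using NUnit.Framework;

    [TestFixture]
    public class CategoryFilterTests
    {
        // Proxy generated from the web reference; its URL comes from the
        // app.config setting shown earlier.
        private SearchService _service;

        [SetUp]
        public void SetUp()
        {
            this._service = new SearchService();
        }

        [Test]
        public void IncludeByCategory()
        {
            // "Include" searches may legitimately return nothing for some values...
            SearchResult[] results = this._service.Search("Category", "Widgets", FilterMode.Include);

            foreach (SearchResult result in results)
            {
                Assert.AreEqual("Widgets", result.Category);
            }
        }

        [Test]
        public void ExcludeByCategory()
        {
            // ...but no single filter value excludes the entire data set, so an
            // "exclude" search should never come back empty.
            SearchResult[] results = this._service.Search("Category", "Widgets", FilterMode.Exclude);

            Assert.Greater(results.Length, 0);
        }
    }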

After a day or so of tinkering (mostly to make sure that the problem wasn’t in my test suite, or in the web service code itself), I contacted the client and asked them to run a piece of SQL against the pre-production database.  The SQL snippet was the core of the “exclude by” search, and sure enough, my contact said the query returned 0 results.  I asked him to strip out the entire WHERE clause, and the query dutifully returned the entire data set.

At this point, we organized a conference call (he brought in another developer on his side; I roped in my project manager, who happened to have been an Oracle DBA in a past life, as well as another of our developers who had more Oracle experience than I did), and started picking the SQL apart to try to narrow down the problem.  As it turns out, there was one particular primary key field that, when we included it in a sub-select and ran a “not in” against it, caused the query to return no records.  There didn’t seem to be any good reason why it would fail, but it was failing nonetheless.  We were all stumped.
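The shape of the offending SQL was roughly this (table and column names invented for illustration):

    -- "Exclude by" search: every item whose ID does NOT appear in the filtered set.
    -- On the pre-production Oracle instance this came back with 0 rows, even though
    -- the sub-select plainly did not cover the whole table.
    SELECT i.ITEM_ID, i.ITEM_NAME
      FROM ITEMS i
     WHERE i.ITEM_ID NOT IN (SELECT f.ITEM_ID
                               FROM ITEM_FILTERS f
                              WHERE f.FILTER_VALUE = 'Widgets');

(The classic way to get zero rows out of a NOT IN is a NULL sneaking into the sub-select, but that shouldn’t be possible against a primary key column, which is part of what made this one so baffling.)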

In the end, we found a way to reorganize the query to work around the problem (something along the lines of the sketch below).  This turned out to be a rather obscure Oracle issue, and even though our respective environments (my company’s and the client’s) were close, they apparently weren’t identical.  I doubt that we would have found this problem without the aid of the test suite; we definitely wouldn’t have found it as quickly.  Chalk up another win for automation.
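To give a flavor of what “reorganize the query” means here (this is only illustrative, not necessarily the exact query we ended up with), the usual alternatives to a NOT IN over a sub-select are a correlated NOT EXISTS or an anti-join:

    -- The same exclusion expressed with NOT EXISTS instead of NOT IN.
    SELECT i.ITEM_ID, i.ITEM_NAME
      FROM ITEMS i
     WHERE NOT EXISTS (SELECT 1
                         FROM ITEM_FILTERS f
                        WHERE f.ITEM_ID = i.ITEM_ID
                          AND f.FILTER_VALUE = 'Widgets');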

In the final post I’ll address an inherent problem with the test suite, and how we decided to shore it up.


July 27, 2008 - Posted by | Agile, Visual Studio/.NET

