Mark Gilbert's Blog

Science and technology, served light and fluffy.

Unit-Testing in Unity

CJ and I are collaborating to build an app that teaches you how to solve single-variable algebraic equations like this:

2x – 4 = 16

Our target audiences are kids and teachers, so it was obvious to us that this would be a mobile app.  For me, the choice of development platform was also a no-brainer – Unity:

  • Katherine and I had tinkered with Unity a while ago, so I was already familiar with it
  • It can generate binaries for every platform we’re considering, and then some (Xbox or PS4, anyone?)
  • One of the primary programming languages it supports is C#
  • Unity integrates with Visual Studio (you can also use MonoDevelop, but if you know and love VS, why?)
  • Unity has a flourishing user community
  • I could get started for free

One of the very first things I figure out with any new development language / platform / IDE is how to build unit-tests with it.  As it turns out, Unity has a built-in test-runner based on NUnit (yet another point of familiarity for me).  The runner is available under the Window menu, but normally I leave it docked on the right side of the IDE.

Now the next step was to figure out WHERE my tests should actually live.  After a little digging I found an article on the Unity Test Runner.  The test runner has two modes – Play and Edit.  I started with Edit, and (as the article shows) there was a button for “Create EditMode Test”.  I clicked it, and it created a folder called Assets/UnitTests/Editor.  I tried renaming and moving the folders in that path, keeping an eye on the test runner – would it find my test if I put it in a folder called this?  As it turns out, the “Editor” folder has special meaning for Unity.  I settled on a folder structure of Assets/Editor/UnitTests (inverting what Unity created by default), so I could add other Editor scripts later if I needed to.
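To make this concrete, here’s a minimal sketch of what one of these EditMode tests can look like.  (The EquationSolver class and all of the names here are illustrative – they’re mine, not from the real project.)

```csharp
using NUnit.Framework;

// A stand-in for the game logic under test: solves a*x + b = c for x.
public static class EquationSolver
{
    public static float Solve(float a, float b, float c) => (c - b) / a;
}

// Saved under Assets/Editor/UnitTests, the Edit Mode runner picks this
// class up automatically once Unity recompiles.
public class EquationSolverTests
{
    [Test]
    public void Solve_TwoXMinusFourEqualsSixteen_ReturnsTen()
    {
        // 2x - 4 = 16  =>  x = 10
        Assert.AreEqual(10f, EquationSolver.Solve(2f, -4f, 16f));
    }
}
```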

Now that I had the basic structure, it was time to get down to business, and start writing the logic for the game.  Fast-forward several weeks and 450+ unit tests later, and I have a few observations about unit testing in Unity.

The integration between the Unity IDE and Visual Studio is fairly seamless.  I can add a new test in the latter, switch back to the former (where it kicks off a compilation in the background automatically), and see it appear a second or two later.  In Studio, there is a preset button for attaching to Unity, allowing me to debug through my test and business logic easily.  The one minor quirk is how Unity reports compilation errors – subtly, at the very bottom of the Unity UI:

This stung me a couple of times – I would make some typographical error, miss this clue that my new code wasn’t actually compiling, and re-run my unit tests expecting a change, only to find the same test failing because my new code wasn’t actually being used yet.  To avoid this, I now do an explicit compilation in Studio, and let Studio throw up the errors and warnings I’m used to seeing.

As a whole, the test runner does its job reliably.  One thing I’d change, though, is how it reports the current state of the tests.  If you’ve already run your test suite once, and want to run it again, Unity doesn’t clear all of the green (and red) flags for the already-run tests.  Unless something switches from red to green or vice versa, you really don’t have any indication which test is being run at the moment, how far it is through the suite, etc.  There IS a progress bar that appears while the tests are being executed:

And once that disappears you know it’s done, but I’d prefer the test statuses to be cleared before each run so I can watch it work through the suite again, fresh.

I’ve also noticed that the tests run much faster if you keep the focus on Unity the whole time.  More than once I kicked off a series of tests, switched to some other window while they ran, and found my tests still running long beyond when they should have finished (I’ve seen suites take 2-7 times longer to complete this way).  Again, a relatively easy issue to avoid – don’t switch away from Unity while the tests are running – but an odd one.

Overall, I’m very happy with my choice of Unity.

June 18, 2017 Posted by | Agile, Game - Algebra, Unity, Visual Studio/.NET | Comments Off on Unit-Testing in Unity

Test-Driven Home Repair

Over the last year, our fluorescent kitchen light was starting to show signs of wear.  Some mornings it would take several minutes to fully turn on.  In January, it started turning on and off on its own.  Then it stopped working altogether.

CJ and I opted for an LED replacement light that had the same footprint as the fluorescent – that way, any unseemly holes or scars that emerged when I took the old light down would at least be covered up when I installed the new one.

Now, does everyone remember what the first step is when working on something electrical?  Make sure you don’t have power running through the lines.

Downstairs I went, to the breaker box.  I had visions of multiple trips up and down, trying to find exactly the right breaker, but incredibly, there was one marked "Kitchen Lt".  I turned it off, and went back up.  I unwrapped the electrical leads to the light – taking care to not touch any of the exposed copper – and tested them.  The indicator light on my tester stayed dark, so that meant no power.  I can proceed, right?

Not so fast.  While I’m relatively comfortable working on the electrical fixtures in my house, I’m also fairly paranoid about it.  After all, I only do something like this maybe twice a year.  How could I tell if the power was REALLY out?

I’d turn the breaker back on, see that the light on my tester actually lit up, turn the breaker off again, and see that it went out.  In other words, test-driven home repair.  I needed to write a failing test – touch the tester to the wires and see the indicator light come up.  Then I would write code to pass that test – turn off the breaker, and the indicator light should go dark.

Another trip to the breaker box.  Another trip back upstairs.  Another test of the wires.

The light on my tester was STILL dark.

Uh-oh.

Do I have a bad tester?  I plugged it into the nearest electrical outlet, and the indicator light came right on.

Um.  Now what? 

With the breaker on, there should be power running through these wires.  Is it possible that I have a break somewhere in the junction box that this light hangs from?  Is there a break in the wires leading from that junction box back to the breaker?  Suddenly, I’m feeling way less confident in my ability to switch out this light.  CJ and I discussed a couple of possibilities, but we decided that if I wasn’t confident enough to finish this job, we’d just have to call in a professional electrician.  I covered up the bare ends (again, taking great care not to touch the copper), feeling a little dejected and more than a little puzzled.

***

After a good night’s sleep, CJ figured out the missing piece.  She caught me this morning and asked, "You turned off the breaker, but did you…" – and that was all I needed.  This is a light, Mark.  A kitchen light… with a switch of its own.

Forehead?  Meet wall.

I toggled the power at the breaker, but the light switch on the wall had been off the entire time.  OF COURSE there wouldn’t be any power running through it.

I pulled my tools back up to the kitchen; uncovered the ends; turned on the light switch.  The indicator light on my tester lit right up.  Sigh.  10 minutes later, I had the new light mounted and working*.

While this was another in a long line of "duh" moments for me in the home-improvement space, I was very glad I insisted on getting a failing test before proceeding.  In my day job, not being that disciplined means bugs or bad assumptions can make it through.  When I’m working with 110V, though…

Yeah, you get the picture.

 

* For the record: 144 LEDs are bright!  CJ says without the cover, the light makes it look like Vegas in our kitchen.  🙂

March 13, 2016 Posted by | Agile | Comments Off on Test-Driven Home Repair

Will you just wait a minute?! NUnit and Async/Await

I was being a good-doobie.  Honest.

I had written a prototype – just something to prove that a particular approach could work.  It did, and so now it was time for me to actually bring that code up to production quality.  Part of that meant modifying how I was invoking the third-party web services – the calls needed to be made asynchronously.

So I went through and wrote my unit tests to also run asynchronously:

        [Test]
        public async void Divide_4DividedBy2_Equals2()
        {
            AsyncUnitTest.Math MathLibrary = new AsyncUnitTest.Math();

            float Quotient = await MathLibrary.Divide(4, 2);

            Assert.AreEqual(2, (int)Quotient);
        }

I ran it through NUnit on my machine, and everything was peachy-keen.  I ended up writing nearly 50 unit tests* like that, converting over all of my calls.  I committed the code, and let TeamCity take over.

And watched every one of those new tests break.

When I looked at the TeamCity log, the errors seemed to hint that the test runner was simply not waiting for the thing under test to complete before trying to run the asserts.  I started searching for things like "nunit async", and pretty quickly came across this two-part series by Stephen Cleary:

In this series, Cleary says that the underlying problem of running async tests is that they don’t have a proper context:

We’ve encountered a situation very similar to async in Console programs: there is no async context provided for unit tests, so they’re just using the thread pool context. This means that when we await our method under test, then our async test method returns to its caller (the unit test framework), and the remainder of the async test method – including the Assert – is scheduled to run on the thread pool. When the unit test framework sees the test method return (without an exception), then it marks the method as “Passed”. Eventually, the Assert will fail on the thread pool.

His solution is to simply give the test an async context, and he provides a very handy wrapper to do just that.  I first had to install his Nito.AsyncEx NuGet package, and then wrap my test in AsyncContext.Run:

        [Test]
        public void Divide_4DividedBy2_Equals2_Asynchrofied()
        {
            AsyncContext.Run(async () =>
            {
                AsyncUnitTest.Math MathLibrary = new AsyncUnitTest.Math();

                float Quotient = await MathLibrary.Divide(4, 2);

                Assert.AreEqual(2, (int)Quotient);
            });
        }

Notice that I’ve removed the "async" keyword from the test itself; AsyncContext.Run does all the work here.  After updating and committing my first test using AsyncContext.Run – a test test, if you will – it ran successfully on TeamCity.  I updated the other 48, and finally got a green build.

***

My build was stable again, but Cleary’s explanation didn’t answer the question of why this worked on my machine in the first place – without using his very awesome library – so, I kept digging.

I first looked up exactly what TeamCity was using to run the tests – it was NUnit, the same as what was on my machine, with a minor difference in the version.  My local copy was 2.6.2, while the version on the build server was 2.6.1.  Could there be a difference in how 2.6.1 was handling async?

Why yes.  Yes there was.  In the NUnit 2.6.2 release notes I found this:

When running under .NET 4.5, async test methods are now supported. For test cases returning a value, the method must return Task<T>, where T is the type of the returned value. For single tests and test cases not returning a value, the method may return either void or Task.

– Source: http://nunit.org/index.php?p=releaseNotes&r=2.6.2

Are you serious?  I just happen to have the first version of NUnit that would properly handle async on my machine, but the build server was one notch older, and therefore couldn’t?  *facepalm*

To further prove that this was the real source of my issue, I installed NUnit 2.6.1 and 2.6.2 side by side on my machine.  I took my two tests from above, both of which should have tried to execute the MathLibrary.Divide function which included a 2-second delay:

    public class Math
    {
        public async Task<float> Divide(int Numerator, int Denominator)
        {
            await Task.Delay(2000);
            return Numerator / Denominator;
        }
    }

When I ran these two tests through NUnit 2.6.1, Divide_4DividedBy2_Equals2 completes in a couple hundredths of a second, while Divide_4DividedBy2_Equals2_Asynchrofied takes just over 2 seconds to complete, for a total of just over 2 seconds:

When I ran these through NUnit 2.6.2, EACH test takes just over 2 seconds to complete, for a total of just over 4 seconds:

So, I have two choices – switch my builds on TeamCity to use at least NUnit 2.6.2 to run the tests, or use Cleary’s Nito.AsyncEx library, which will allow me to leave the build server as is.  In any event, at least I have a reasonable explanation for what was happening. 
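For what it’s worth, once you’re on NUnit 2.6.2 or later there’s also a cleaner shape for the tests themselves: return Task instead of void, so the runner has something it can explicitly await.  A sketch of my first test rewritten that way (using the Math class shown above):

```csharp
[Test]
public async Task Divide_4DividedBy2_Equals2_TaskReturning()
{
    AsyncUnitTest.Math MathLibrary = new AsyncUnitTest.Math();

    // Because the runner awaits the returned Task, the Assert is
    // guaranteed to execute before the test is marked passed or failed.
    float Quotient = await MathLibrary.Divide(4, 2);

    Assert.AreEqual(2, (int)Quotient);
}
```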

The funny thing is that it’s usually MSBuild that messes with me.  Apparently NUnit gave him the week off.

 


* Yes, I realize that by calling the service directly, this no longer counts as a "unit" test, but rather an integration test.  That distinction isn’t relevant to the issue described in this post, though, so I’m going to gloss over the mock objects in the real code.

October 23, 2014 Posted by | Agile, Visual Studio/.NET | Comments Off on Will you just wait a minute?! NUnit and Async/Await

Unit testing with DBNull

Recently, I was writing a class that would parse a DataRow (returned by a stored procedure) and construct a strongly-typed object from it.  I wanted to test the case where the stored procedure returned DBNull for one of the fields.  My first attempt at the unit test setup started out like this:

Dim DPTable As DataTable
Dim DataRowValues() As Object = {Value1, Value2, Value3, System.DBNull.Value, Value4}

DPTable = New DataTable
DPTable.Columns.Add(New DataColumn("Col1Name", GetType(Long)))
DPTable.Columns.Add(New DataColumn("Col2Name", GetType(String)))
DPTable.Columns.Add(New DataColumn("Col3Name", GetType(Long)))
DPTable.Columns.Add(New DataColumn("Col4Name", GetType(String)))
DPTable.Columns.Add(New DataColumn("Col5Name", GetType(String)))

DPTable.Rows.Add(DataRowValues)
MyTestObject = New MyClass(DPTable.Rows(0))

Mock up a table and add a test row to it.  Simple.  The problem was the compiler complained about trying to directly assign System.DBNull.Value in the Object() array: "Value of type ‘System.DBNull’ cannot be converted to ‘String’." 

Ok, so DBNull isn’t a value you can assign that way.  What you CAN do, however, is put a valid placeholder value in the Object() array, and then assign DBNull to that cell in the DataRow after you’ve added the row to the DataTable:

Dim DataRowValues() As Object = {Value1, Value2, Value3, SomeValidValue, Value4}

DPTable.Rows.Add(DataRowValues)
DPTable.Rows(0).Item("Col4Name") = System.DBNull.Value
MyTestObject = New MyClass(DPTable.Rows(0))

That worked like a charm.
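For anyone not working in VB, the same workaround translates directly to C# (the table and column names here are illustrative):

```csharp
using System;
using System.Data;

DataTable dpTable = new DataTable();
dpTable.Columns.Add(new DataColumn("Col4Name", typeof(string)));

// Add the row with a valid placeholder value first...
dpTable.Rows.Add(new object[] { "SomeValidValue" });

// ...then overwrite the cell with DBNull once the row is in the table.
dpTable.Rows[0]["Col4Name"] = DBNull.Value;

Console.WriteLine(dpTable.Rows[0].IsNull("Col4Name"));  // prints True
```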

October 13, 2009 Posted by | Agile, Visual Studio/.NET | Comments Off on Unit testing with DBNull

Subversion stepping on its own toes

Last week I hit an interesting issue with one of the projects running under our CruiseControl.NET build server. Before I get into the specifics, I need to lay out some of the structure.

 

“He’s hitting me!”

Nearly all of the public-facing sites that my company builds have some element of Flash to them. The Flash source code and especially all of the assets that get rolled in to a finished SWF take up quite a bit of room on the hard drive.

We learned pretty early on that pulling all of the Flash source code down to the build server was a waste of time because 1) we had limited space on the hard drive and the Flash assets took up a LOT of room; and 2) NAnt wouldn’t do anything with it – the Flash team was already compiling the SWFs for us and committing them to Subversion. All we were doing was copying them (via NAnt) from the folder that the Flash developers were using and putting them in the /swf folder below the site web root. So, we switched from using a single <sourceControl type="svn" /> block to using <sourceControl type="multi" /> (http://confluence.public.thoughtworks.org/display/CCNET/Multi+Source+Control+Block), and cherry-picking which folders in Subversion we would pull from.

A couple of weeks ago, one of my colleagues, Joel, suggested that we could have CCNet pull the compiled SWFs down from Subversion, and drop them directly into the /swf folder below the site root on the build server. That would save a step in NAnt, would allow the overall process to run slightly faster (by not copying the SWFs), and would help to conserve room on the hard drive (by not storing two copies of the SWFs).

I thought that was a good idea, and I retrofitted one of my projects to use that new process. It seemed to work great.

 

“No, me first!”

Now, as many of you know, we come to the point of the blog post where the author reveals “The Catch”.

When I tried to use the same modified process on a completely fresh project – one where none of the directory structure on the build server existed yet – CCNet threw this error: “Failed to add directory ‘wwwroot’: object of the same name already exists”.

My first thought was a case-sensitivity clash. Subversion is case sensitive but Windows is not, so it’s possible to have two folders in the former such as “mybranch” and “MyBranch”, but my Subversion client will have fits when it tries to bring those down to the latter. After checking the repository and the working folders on the build server, everything matched just fine. When I stopped and thought about it some more, I realized this probably couldn’t have been the cause anyway, since the build script for the new project was nearly a direct copy of one from a previous project – a previous project that had been running smoothly for weeks.

 

“Do I have to split you two up?”

I decided to fall back on a tried-and-true method of troubleshooting – comment out successive chunks of code until the error goes away. I had two <svn> blocks in the CCNet.config file, the first for populating the MyBranch/site/wwwroot working folder, and the second for populating the MyBranch/site/wwwroot/swf working folder. I removed the latter and re-ran the project. Lo and behold, it worked! I then replaced the second <svn> block and ran it again, fully expecting it to fail again. Shenanigans – it STILL worked! I blew away everything under the MyBranch in the working folder on the Build Server, and made sure I could reproduce this. Sure enough I could.

I began to suspect that the source of the problem was that I had one working folder being placed inside another. Up to this point, I had been assuming that the <svn> blocks were executed sequentially, and in the order that I placed them in the ccnet.config file. Perhaps they were being processed in reverse order. If that was the case, then the Flash source would get pulled down first, and the Subversion working folder of “swf” would get created in a “regular” folder structure of MyBranch/site/wwwroot. Then, the first <svn> block gets processed, and that tries to create a Subversion working folder for MyBranch/site/wwwroot. Since that folder already exists as a regular folder, it chokes.

When I removed the second block, the “wwwroot” folder was able to be established as a working folder. Then when I added the second block back in with “wwwroot” already there, it didn’t have a problem with adding another working folder as a subfolder.

I haven’t been able to find a way to force a specific execution order on the <svn> blocks. It may be that they get executed in reverse order every time, or that they get executed deepest-level-first, or that it’s not deterministic at all. At any rate, the hack that I settled on was running the project through once with just the first <svn> block to establish “wwwroot” as a working folder, and then adding the second block to get the rest. This is only an issue when the build project is first being configured, so I don’t feel overly terrible about handling it this way.

September 10, 2009 Posted by | Agile | Comments Off on Subversion stepping on its own toes

Unit Testing for Events

This is the second of a two-part series on unit testing.  The first part covers testing for exceptions, while this one will illustrate events.

Here’s the scenario that I came across a week or so ago.  I have a custom class that performs a potentially long-running import function, and I wanted to communicate to the invoking code when a record was successfully imported, and when a record was rejected for some reason (invalid data, wrong number of data points, etc.).  The invoking code was actually the user form, and it was responsible for updating record stats (“Records Imported Successfully: 120”) and logging the rejected records with a meaningful error message so the user can examine them after the fact.

Up to this point, the unit tests for the Import class would pass in a set of delimited data rows, and then analyze the actual records imported or rejected.  Now what I wanted to do was to write tests for the Imported and Rejected events that the class was going to throw.  To do that, I followed the same general idea as with the unit tests for exceptions – handle the events like I would in “real” code, and make sure that they happen at the right times (or more specifically, that they happened with the expected frequency).

First, I started with a test:

Private WithEvents _MyImport As MyBusinessServices.Import
Private _ImportedCount As Integer
Private _RejectedCount As Integer

<Test()> _
Public Sub DoImport_MinRequiredFields_AllRecordsImported()
    Me._ImportedCount = 0
    Me._RejectedCount = 0

    Me._MyImport = New MyBusinessServices.Import(42, "blah")
    AddHandler Me._MyImport.Imported, AddressOf Me.BookImported
    AddHandler Me._MyImport.Rejected, AddressOf Me.BookRejected

    Me._MyImport.DoImport

    Assert.AreEqual(3, Me._ImportedCount)
    Assert.AreEqual(0, Me._RejectedCount)
End Sub

That in turn required me to actually define the events that we’re going to be testing.  This is done as part of my Import class (the one under test):

Public Event Imported(ByVal sender As Object, ByVal e As EventArgs)
Public Event Rejected(ByVal sender As Object, ByVal e As EventArgs)

It also required me to write the event handlers (part of the test fixture):

Private Sub BookImported(ByVal sender As Object, ByVal e As EventArgs)
    Me._ImportedCount += 1
End Sub

Private Sub BookRejected(ByVal sender As Object, ByVal e As EventArgs)
    Me._RejectedCount += 1
End Sub

All I’m doing here is counting how many records were imported or rejected given a particular data set. When my test runs and the DoImport method does its thing, I expect there to be one or more events raised.  I have event handlers wired up listening for those events and they keep track of the number of times each event is raised.  The assertions at the end of the test check those values.

As you can imagine, this is only the tip of what you can do with this general idea.  In my production code, both the Imported and Rejected events use a custom class that descends from EventArgs to pass back some additional information about the item that was just imported or rejected, and the tests check that returned data to make sure it was what I expected given the input.  I’ve removed this from the above code snippets to avoid muddling the core point (testing for events).

Additionally, the code shown here lacks much of the refactoring that the test fixture eventually went through.  Again, I’ve simplified the structure to make my point clearer.

July 8, 2009 Posted by | Agile, Visual Studio/.NET | Comments Off on Unit Testing for Events

Unit Testing for Exceptions

This is the first in a two-part series on unit testing.  I’ll be covering the structures that I use in my unit tests for custom exceptions and events.

The basic idea for exceptions is that I do something in the unit test that should cause an exception to be thrown, and then wrap that code in a Try..Catch block to check that it IS thrown.

<Test()> _
Public Sub Save_WeightNotNumeric_CorrectExceptionThrown()
    Dim MyObject As MyClass
    Dim WasCorrectExceptionThrown As Boolean
    Dim ExceptionText As String

    WasCorrectExceptionThrown = False
    ExceptionText = "{exception text was not set}"

    Try
        MyObject = New MyClass
        MyObject.Save
    Catch ex As MyClass.InvalidDataException
        WasCorrectExceptionThrown = True
        ExceptionText = ex.Message
    Catch ex As Exception
        WasCorrectExceptionThrown = False
        ExceptionText = ex.Message
    End Try

    Assert.IsTrue(WasCorrectExceptionThrown, "Expected exception was not thrown; this one was: " & ExceptionText)
    Assert.IsTrue(ExceptionText.Contains("field 'Weight'"), "Exception thrown did not mention the correct field: " & ExceptionText)
End Sub

The first Catch block looks for the exception that I’m expecting to be thrown, and sets WasCorrectExceptionThrown to True when it executes.  I then include a second Catch block that handles any other exceptions thrown.  In both cases I also save off the exception text for use in the Asserts.

The assertions look for two things – did the correct exception get thrown, and did that exception have the appropriate message*.  The second of these two assertions will vary greatly depending on the particular needs of the test, and in many cases may not be needed at all.

Notice also that I set the ExceptionText variable to a known value before starting – "{exception text was not set}".  This allows the first assertion to cover both the case where the wrong (that is, an unexpected) exception was thrown as well as the case where no exception was thrown.  I can see which case occurred based on the error message that appears in NUnit.

NUnit does provide an ExpectedExceptionAttribute (see http://www.nunit.org/index.php?p=exception&r=2.2.10 for more information), but that only allows the test to see if the correct exception was thrown.  So far as I can tell there isn’t a way to inspect the message that comes back to make sure that it’s correct.
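Later NUnit releases (2.5 and up) also offer Assert.Throws, which fails the test if the expected exception isn’t thrown and hands back the caught exception so its message can be inspected.  A sketch of the same test in that style – in C# rather than VB, and with a made-up Widget class standing in for the class under test:

```csharp
using System;
using NUnit.Framework;

// Minimal stand-in for the class above, just enough to exercise the test.
public class Widget
{
    public class InvalidDataException : Exception
    {
        public InvalidDataException(string message) : base(message) { }
    }

    public void Save()
    {
        // Simulate the validation failure under test.
        throw new InvalidDataException("The field 'Weight' must be numeric.");
    }
}

public class WidgetTests
{
    [Test]
    public void Save_WeightNotNumeric_CorrectExceptionThrown()
    {
        var widget = new Widget();

        // Assert.Throws fails the test if no exception (or a different
        // type) is thrown, and returns the caught exception for inspection.
        var ex = Assert.Throws<Widget.InvalidDataException>(() => widget.Save());

        StringAssert.Contains("field 'Weight'", ex.Message);
    }
}
```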

Next we’ll look at testing for custom events.

 

*You might ask “Mark, why are you passing information back in the message?  If you’re going to go to the trouble to build custom exceptions, why not build a new custom exception for every exceptional case?”  In a lot of cases I do, but in the cases where I’m validating a slew of input parameters before doing some processing and I want to communicate when one doesn’t contain valid data, I’ll throw something like an InvalidDataException and pass back the name of the field that was invalid.  I tend to be lazy and will write these messages to be appropriate for the user, so the class that catches the exception can simply clean up what it needs to based on the type of exception thrown and then pass the message back to the UI unmodified.

July 6, 2009 Posted by | Agile, Visual Studio/.NET | Comments Off on Unit Testing for Exceptions

Kalamazoo X Conference

When I was in my fifth year in the Computer Science program at Western Michigan University (yes, my four-year degree required a five-year plan), I thought I had a good collegiate resume built up.  I was near the top of my class.  I was a better than average C hacker.  I was comfortable on SPARCs and PCs.  I was a great problem solver.

I had everything I needed to be a great software developer.  I was so sure of myself that instead of hiring on to a great company, I decided to start my own.  I didn’t want to work for a huge software shop, so why not go to the other extreme and strike out on my own?  I already had a lead for my first major customer.  Others would surely flock to me.

Ahem.  Reality has a way of setting in when you least expect it.

After two and a half years of operating as an independent contractor and being supported by my lovely wife’s salary, I decided that perhaps the independent gig wasn’t for me after all.  Having said that, I wouldn’t trade those two and a half years for anything.  That was one of the most intensive stretches of learning that I’ve done in my life.

What did I learn?  That knowing a programming language inside and out, knowing how to make Visual Studio sing, and being able to solve Millennium Problems for breakfast are necessary, but not sufficient, to make a great software developer.  (Ok, so solving Millennium Problems is probably NOT necessary, but you get the idea.)

What was missing?  What else did I do for that two and half years that I didn’t realize I was going to need?

  1. Learning to work with clients and other developers.
  2. Architecting a solution, not just writing the code for it.
  3. Being able to do a little self-promotion.

This is just a slice of the topics that we want to tackle in the first ever Kalamazoo X Conference*, taking place on Saturday, April 25 in downtown Kalamazoo.  Our goal with this conference is to complement the excellent technical conferences in the region (such as the Days of .NET and CodeMash) with sessions in human interaction, interface and graphic design, and system architecture.  We’re putting together a great lineup of speakers and are eager to get people together to talk about all of the other things needed to be a great software developer.

Hope to see you there.

 

 

* Full Disclosure: I am on the planning committee for the X Conference.

March 24, 2009 Posted by | Agile, General | 1 Comment

NAntRunner 0.3 Released

I just released NAntRunner 0.3, now available at http://CodePlex.com/NAntRunner.  The biggest single change was dropping the list box of scripts, and introducing a tree view:

The top level of the tree view is a list of script “groups”.  These can be called anything you like.  Within each script group you can place one or more scripts.  When you first add a script, the name that appears in the tree view will be the file name of the script itself, for example “Primary.build”.  You can rename that to anything you like.

There are controls in the tree view context menu for adding groups, adding scripts, renaming groups/scripts, deleting groups/scripts, and running scripts.  The five buttons that appear above the tree view replicate that functionality.

I’ve also added a “Last Saved” message next to the Close button in the lower right corner.  Like its predecessors, NAntRunner 0.3 automatically saves all changes (adding or removing scripts, renaming items in the tree, resizing the utility, etc.).  The message is simply there to let you know when it happens.

Upgrading to the new version is easy.  Simply download the 0.3 ZIP, and extract it to the folder with your previous installation of NAntRunner.  The 0.3 release will automatically upgrade the settings for the previous release.  Any scripts that you had defined will be placed into a single group called “Default”.  You can move them to new groups, and rename them at that point.

I had originally planned for a feature where you could drag and drop scripts to rearrange them, or move them to new groups, but I ended up postponing it until a later release.  The tree view, and specifically being able to associate a nice display name with each script, solved the main annoyance with the 0.2 release – namely that if you had scripts at any significant depth on your file system, they became unreadable.  Dropping and adding scripts is fairly easy with this release (simply copy the path from the “Path to Script” text box, hit the Add Script button, and paste it into the dialog), so I don’t see moving scripts/groups as a huge priority right now.  If that’s a feature that you would find it hard to live without, please let me know in the comments here or on the CodePlex site.

Enjoy!

August 30, 2008 Posted by | Agile, Tools and Toys | Comments Off on NAntRunner 0.3 Released

Web Service Testing – Part 4 of 4

In this final post, I’ll discuss an inherent problem with the test suite I constructed for a web service I wrote.  The first three posts covered the basic structure of the test suite, a performance issue that I worked around, and the value that the suite brought to the project.

Each test would hit the web service twice – once to perform an “include by X” search and once to perform the matching “exclude by X” search, where “X” was some search criterion.  I would count the number of records returned by each and add them together, expecting to get the total number of records.  The inherent problem with this is that all three numbers were obtained via the web service.  I was effectively testing the web service using the web service, so the tests weren’t providing an independent critique of the functionality under test.
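Each test boiled down to something like the following sketch (CatalogService and its method names are placeholders for the real generated proxy, not the actual API):

```csharp
[Test]
public void Search_IncludeByX_PlusExcludeByX_EqualsTotal()
{
    // CatalogService stands in for the generated web service proxy.
    CatalogService Service = new CatalogService();

    int Included = Service.SearchIncluding("X").Length;
    int Excluded = Service.SearchExcluding("X").Length;
    int Total = Service.SearchAll().Length;

    // The weakness: all three counts come from the service under test.
    Assert.AreEqual(Total, Included + Excluded);
}
```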

As I thought about the problem, I came to the conclusion that the ideal way to test the service would be to insert a few test records with known values, use the web service to check that I can retrieve those records by searching on those values, and then delete the test records when the test was complete.  Assuming that my test-record-insert and test-record-delete methods were working properly, this would be an independent test of the service.  Unfortunately, I don’t have direct access to the pre-production and production databases, so my “ideal” test suite wouldn’t work in all environments.  I wanted to be able to run the majority of my suite against all environments, so that I could verify the functionality in development, and verify that the service was being deployed correctly to the other environments.

What we ended up doing to provide this independent test was to take one of our web sites that currently accesses the database, and rewire it to pull data from the production web service instead.  Then we checked the rewired site against the current production site, comparing the data each brought back.  This became our independent check on the service (which, by the way, performed beautifully; no new issues were discovered in this final test).

August 5, 2008 Posted by | Agile, Visual Studio/.NET | Comments Off on Web Service Testing – Part 4 of 4