Mark Gilbert's Blog

Science and technology, served light and fluffy.

Reducing the Tedium: Generalized Unit Tests via Reflection

In the course of developing a new class, especially one that is tied to Castle ActiveRecord, I will usually add one or more String properties.  Unless there is some reason to make these nullable, I usually modify the getters to return either an empty string or a valid value.  Knowing that the property will only be in one of these two states makes it easier to use that property in expressions like MyObject.MyProperty.Contains("blah").  I don’t have to worry about a null reference exception here if I know MyProperty can’t return a null.
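
For reference, here is a minimal sketch of the property pattern being described (the class and property names are placeholders, not from a real project):

    public class Recipe
    {
        private String _Title;

        public String Title
        {
            // Normalize a null or all-whitespace backing value to an empty string
            get { return String.IsNullOrWhiteSpace(this._Title) ? "" : this._Title; }
            set { this._Title = value; }
        }
    }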

To ensure that the properties are up to snuff, I will invariably write a series of five unit tests per property:

1) Initializing the class – property should return empty string
2) Setting the property to null – property should return empty string
3) Setting the property to an empty string – property should return empty string
4) Setting the property to some whitespace – property should return empty string
5) Setting the property to a valid value – property should return that value.

As you can imagine, writing these five for each String property gets tedious.  A couple of weeks ago, I wondered if I could automate these tests – specifically, could I write something that would automatically and dynamically check every String property on a class to make sure each one passed these five conditions?

As it turns out, the answer is a resounding yes.  Here is my NUnit test for Strings:


    [TestFixture]
    public class PropertyTests
    {
        Type[] _TypesToCheck = { 
                                   typeof(PropertyTestsViaReflection.NS_A.ClassA), 
                                   typeof(PropertyTestsViaReflection.NS_B.ClassB),
                                   typeof(PropertyTestsViaReflection.NS_C.ClassC)
                               };

        [Test]
        public void StringProperties_DefaultToEmptyString()
        {
            String TestValue, ClassName;
            PropertyInfo[] ClassProperties;
            Object ClassInstance;

            for (int i = 0; i < this._TypesToCheck.Length; i++)
            {
                ClassName = this._TypesToCheck[i].Name;
                ClassProperties = this._TypesToCheck[i].GetProperties();
                ClassInstance = Activator.CreateInstance(this._TypesToCheck[i]);

                System.Diagnostics.Trace.Write(String.Format("Now testing {0}...", ClassName));

                foreach (var PropertyUnderTest in ClassProperties.Where(p => p.PropertyType == typeof(String)))
                {
                    TestValue = (String)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.IsEmpty(TestValue, String.Format("{0}.{1} did not initialize properly", ClassName, PropertyUnderTest.Name));

                    PropertyUnderTest.SetValue(ClassInstance, null, null);
                    TestValue = (String)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.IsEmpty(TestValue, String.Format("{0}.{1} did not handle null properly", ClassName, PropertyUnderTest.Name));

                    PropertyUnderTest.SetValue(ClassInstance, "", null);
                    TestValue = (String)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.IsEmpty(TestValue, String.Format("{0}.{1} did not handle an empty string properly", ClassName, PropertyUnderTest.Name));

                    PropertyUnderTest.SetValue(ClassInstance, "  ", null);
                    TestValue = (String)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.IsEmpty(TestValue, String.Format("{0}.{1} did not handle a blank string properly", ClassName, PropertyUnderTest.Name));

                    PropertyUnderTest.SetValue(ClassInstance, "abc123", null);
                    TestValue = (String)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.AreEqual("abc123", TestValue, String.Format("{0}.{1} did not handle a valid string properly", ClassName, PropertyUnderTest.Name));
                }

                System.Diagnostics.Trace.WriteLine("completed");
            }
        }
    }

First, I define a hard-coded list of classes called "_TypesToCheck".  (I did this more for convenience than anything else; at the end of this post I suggest a better way.)  For each of these types, I grab the name of the class (to be used in the error messages) and the list of properties to check, and create an instance of the class.

I boil the list of properties down to just the ones that are of type String, and then iterate over each of those, running my five tests.  If any of these tests fail for any of the properties for any of the classes, the test reports the failure and stops.

I pieced this method together from several sources around the web.


In addition to checking the String properties, I also perform a couple of tests on any properties with the name "ID".  This is an ActiveRecord standard, and I want to make sure that the ID properties are 0 initially, and that they can’t be assigned a negative value (this would be just another test in the same fixture).  The basic structure is the same, but instead of looking for properties of type "String", I boil my list down to properties with the name "ID":

        [Test]
        public void IDProperties_DefaultTo0()
        {
            long TestValue;
            String ClassName;
            PropertyInfo[] ClassProperties;
            Object ClassInstance;

            for (int i = 0; i < this._TypesToCheck.Length; i++)
            {
                ClassName = this._TypesToCheck[i].Name;
                ClassProperties = this._TypesToCheck[i].GetProperties();
                ClassInstance = Activator.CreateInstance(this._TypesToCheck[i]);

                System.Diagnostics.Trace.Write(String.Format("Now testing {0}...", ClassName));

                foreach (var PropertyUnderTest in ClassProperties.Where(p => p.Name.Equals("ID", StringComparison.CurrentCultureIgnoreCase)))
                {
                    TestValue = (long)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.AreEqual(0, TestValue, String.Format("{0}.{1} did not initialize properly", ClassName, PropertyUnderTest.Name));

                    PropertyUnderTest.SetValue(ClassInstance, 0, null);
                    TestValue = (long)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.AreEqual(0, TestValue, String.Format("{0}.{1} did not handle being set to 0 properly", ClassName, PropertyUnderTest.Name));

                    PropertyUnderTest.SetValue(ClassInstance, -1, null);
                    TestValue = (long)PropertyUnderTest.GetValue(ClassInstance, null);
                    Assert.AreEqual(0, TestValue, String.Format("{0}.{1} did not handle being set to a negative properly", ClassName, PropertyUnderTest.Name));
                }

                System.Diagnostics.Trace.WriteLine("completed");
            }

        }


These tests certainly don’t take care of all testing for a class, but they do handle most of the basics.  When I add a new class, I simply update the _TypesToCheck array to reference it.  If I add a new property to one of the covered classes, the test fixture picks up on it immediately and tells me when the property isn’t behaving properly.

This is only a first step.  I can easily see a couple of enhancements that might prove useful:

  • Instead of looking at the property name or type, attach a validator attribute (custom or otherwise) that identifies what kinds of tests to perform on that property.  If it is a String property, run the String tests; if it is a Date property, make sure the date is not DateTime.MinValue; and so on.
  • Decorate the classes, and then have the test fixture reflect over the entire assembly to find the classes to test.  This would replace the need for a hard-coded array of types (see the sketch below).
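
As a rough sketch of that second enhancement (the marker attribute here is made up for illustration, and the fixture would need a using for System.Linq):

    // Marker attribute applied to each class that should be auto-tested
    [AttributeUsage(AttributeTargets.Class)]
    public class AutoPropertyTestsAttribute : Attribute { }

    // In the fixture, replacing the hard-coded array: find every decorated
    // type in the assembly that contains the classes under test
    Type[] _TypesToCheck = typeof(PropertyTestsViaReflection.NS_A.ClassA)
                               .Assembly
                               .GetTypes()
                               .Where(t => t.IsDefined(typeof(AutoPropertyTestsAttribute), false))
                               .ToArray();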

Enjoy!


July 3, 2013 Posted by | Visual Studio/.NET | Comments Off on Reducing the Tedium: Generalized Unit Tests via Reflection

Wait your turn! Async and Await

It’s all about the bubbles.  Let me explain.

I was trying to implement something that I thought would be more efficiently done asynchronously.  Since I was using .NET 4.0 with the "Async for .NET Framework 4" NuGet package, I had access to the new "async" and "await" keywords.  These were designed to make it easier to kick off an asynchronous operation.  I was about to find out, however, that these weren’t silver bullets – I still had to understand what was going on to use them properly.

For illustrative purposes only, I’ll use a boiled-down version of what I was trying to implement.  This is a simple console application that tries to do some "work" (really just some calls to Thread.Sleep()), and records the order in which it executes the steps.

First, the simple case – good ole’ fashioned synchronous:

        private static void Option1()
        {
            Option1_Step1();
            Console.WriteLine("3");
            Option1_Step2();
            Console.WriteLine("6");
        }
        private static void Option1_Step1()
        {
            Console.WriteLine("1");
            DoSomeSynchronousWork(500);
            Console.WriteLine("2");
        }
        private static void Option1_Step2()
        {
            Console.WriteLine("4");
            DoSomeSynchronousWork(500);
            Console.WriteLine("5");
        }

The DoSomeSynchronousWork() routine looks like this:

        private static void DoSomeSynchronousWork(int Milliseconds)
        {
            System.Threading.Thread.Sleep(Milliseconds);
            return;
        }

Which outputs the checkpoint numbers 1-6 in order.

My first attempt to run Step 1 and Step 2 asynchronously, however, didn’t work out as I had expected:

        private static void Option2()
        {
            Option2_Step1();
            Console.WriteLine("3");
            Option2_Step2();
            Console.WriteLine("6");
        }
        private static async void Option2_Step1()
        {
            Console.WriteLine("1");
            await DoSomeAsynchronousWork(500);
            Console.WriteLine("2");
        }
        private static async void Option2_Step2()
        {
            Console.WriteLine("4");
            await DoSomeAsynchronousWork(500);
            Console.WriteLine("5");
        }

My initial, naive interpretation of "asynchronous operations don’t block the thread" was that if I slapped the "await" keyword onto a method call, it would execute that method on a new thread, and put the rest of my application to sleep, freeing the thread that it had been on to be used by something else on the computer.  Then, when my “awaitable" method returned, the computer would wake my program back up to continue where it left off.  No blocking, right?

I’m still not certain how right this interpretation was, but it definitely was not 100%.  I doubt I even made it past the 50s.  My program was now printing checkpoints 1, 3, 4, and 6 immediately, with 2 and 5 straggling in at the end.

Excellent.  Let’s just execute all of the methods that we can as fast as we can, and the awaitable ones will just catch up.  Mmmm… yeah, that’s really not going to work for me.

As I dug into it, and consulted Stephen Cleary’s excellent post, "Async and Await", I managed to piece together what was going on here.


First, the Main() routine calls Option2(), and execution begins.

Control then goes to the first step – Option2_Step1().

Which writes out the first checkpoint.  Next, it begins execution of the asynchronous work item.

The DoSomeAsynchronousWork() function is as follows:

        private static Task DoSomeAsynchronousWork(int Milliseconds)
        {
            return (new TaskFactory()).StartNew(() => System.Threading.Thread.Sleep(Milliseconds));
        }

Now, the work here has only STARTED.  Since we said we wanted to await this method, execution of Option2_Step1() will now be paused, waiting for the task returned by DoSomeAsynchronousWork() to complete.  Control will now be passed back to the calling function, Option2().

Where it will charge forward onto the next piece of logic, checkpoint #3.  Then it will continue on to Option2_Step2().

Where it will then write out checkpoint #4, and start yet another asynchronous task.

With that second asynchronous task started, it will immediately return to the calling function, Option2(), and continue on by printing out checkpoint #6.

With Option2() now complete, it returns to the Main() function, where it prints out the "Press [Enter] to run again." prompt, and waits for the user to press "Enter".

A short time later, the first asynchronous task, started in Option2_Step1(), completes and execution is started back up for that method, which results in checkpoint #2 being written out.  Then finally, the asynchronous task that was started in Option2_Step2() completes, and execution is started back up for that method, which results in checkpoint #5 being written out.

***

What I came to realize was that the methods I was using "async" on – Option2_Step1() and Option2_Step2() – formed a kind of "bubble".  Things within the bubble would be executed serially, but asynchronously. 

The serial part means that checkpoint 1 would always be reached before checkpoint 2, and checkpoint 4 would always be reached before checkpoint 5.

The asynchronous part means that when execution reaches the first work item (between checkpoints 1 and 2), the await keyword tells .NET to start that work, AND THEN RETURN AND KEEP GOING with the rest of the program – in this case, it would return to the Option2() method, print out checkpoint 3, etc.  In other words, when things inside the bubble are paused, control is passed back to whatever called the bubble.

This is where the "doesn’t block" facet comes into play – when something asynchronous is started, .NET will return control to the calling method, putting the current method on pause (so to speak).  When the work item is finished, .NET will start that method back up again, right where it left off.

So, how can I ensure that Step1 will complete before Step2 does?  The solution I landed on was to move the asynchronous nature of this program up a level:

        private static async void Option3()
        {
            await Option3_Step1();
            Console.WriteLine("3");
            await Option3_Step2();
            Console.WriteLine("6");
        }
        private static async Task Option3_Step1()
        {
            Console.WriteLine("1");
            await DoSomeAsynchronousWork(500);
            Console.WriteLine("2");
        }
        private static async Task Option3_Step2()
        {
            Console.WriteLine("4");
            await DoSomeAsynchronousWork(500);
            Console.WriteLine("5");
        }

Here, I’ve declared Option3() – the top level function – async as well.  With that in place, Option3_Step1() and Option3_Step2() should be called serially.  As before, Main() calls Option3(), and we begin by calling Option3_Step1().

That immediately leads to the first checkpoint.

And the first asynchronous piece of work.

Which means execution of Option3_Step1() is now paused, and control is returned to the calling function, Option3().  However, since we’re awaiting Option3_Step1(), control is passed out ANOTHER level, back to Main(), where it prints out the "Press [Enter] to run again." prompt, and waits for the user to press "Enter".  At that point, there is nothing more that can be done, so the entire program waits.

When our first bit of asynchronous work completes, it picks back up where it left off in Option3_Step1():

Which means checkpoint 2 is now printed out.  That’s the end of Option3_Step1(), so it returns to Option3(), prints out checkpoint 3, and begins execution of Option3_Step2():

Checkpoint 4 is printed out, and then the second bit of asynchronous work is started.

Again, execution is paused here, and control is returned to the calling function, Option3(), which is itself paused, passing control back up to Main().  Since the rest of Main() has already executed, nothing more happens.  The program is simply paused until the second piece of asynchronous work is completed.

When it completes, Option3_Step2() picks back up, and checkpoint 5 is written out.

And then checkpoint 6.

Here is the output, in aggregate: checkpoints 1 through 6, in order – with the "Press [Enter]" prompt sneaking in early (see the footnote below).

So, by making Option3() asynchronous as well, we eliminate most of the unexpected behavior – things, for the most part, happen in order*.  In fact, doing this drives home another point that Cleary made in an MSDN article titled "Best Practices in Asynchronous Programming": trying to mix synchronous and asynchronous code is tricky at best.  It’s better to make something asynchronous "all the way down".
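
One further step down that road – not something the sample download does, just a sketch – is to have Option3() return a Task, so that Main() can explicitly wait for everything to finish before printing its prompt:

        private static async Task Option3()   // Task instead of void
        {
            await Option3_Step1();
            Console.WriteLine("3");
            await Option3_Step2();
            Console.WriteLine("6");
        }

        static void Main(string[] args)
        {
            // Blocking on the Task is safe in a console app (there is no UI
            // SynchronizationContext to deadlock against), and it guarantees
            // checkpoints 1-6 are printed before the prompt appears.
            Option3().Wait();
            Console.WriteLine("Press [Enter] to run again.");
            Console.ReadLine();
        }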

The full solution for the sample used here can be found in the AsyncAndAwait.zip archive at http://Tinyurl.com/MarkGilbertSource.  You’ll need Visual Studio 2012 Update 3 to run it (I built it using the Express version).


* For the purposes of this demonstration, the "Press [Enter] to run again." prompt is printed out of order.  In my real program, the Main() routine didn’t have anything else to do other than wait for the steps to complete, so on the surface it appeared a lot less weird.  Control was still being passed back to Main(), however, and I would have to be very careful about adding anything to my program.

July 1, 2013 Posted by | Visual Studio/.NET | Comments Off on Wait your turn! Async and Await

How I got my groove back – Music Files, Playlists, and the Sansa Clip

Until a couple of months ago, I had only really been using my MP3 player, a Sansa Clip, to listen to music while I was at work, but then I started finding other uses for it.  For example, I can connect it as an input to my guitar amp, and then play along with whatever song I cue up.  I also found myself plugging it in at home, finding it far easier to use than Windows Media Player (WMP).

WMP works fine for playing music, but managing my collection is another matter.  I’d drop a new MP3 into a folder, and then fight for 15 minutes with WMP to get it to actually recognize it.  Sometimes it would appear under "Songs" but not "Albums".  Sometimes I’d drag it into a playlist, only to have it get duplicated.  Sometimes the file wouldn’t sync to my player at all: no errors, but no transferring bits either.  These are probably just cases of me not doing it the "WMP way", but whatever that is, it isn’t intuitive.

The more I thought about it, the more I realized that the three most common things I was still using WMP for were:

  1. Ripping CDs.
  2. Syncing music to the Sansa Clip.
  3. Burning podcasts onto CD so I can listen to them in my car.

I haven’t ripped a CD in months because I’ve been buying all my recent music online.  Burning podcasts onto CD is actually very painless in WMP, so I will probably continue using it for that.

But syncing?  Could I manage the music on the player directly?  Plugging the player into a USB port registers it as another storage device, available in Windows Explorer.  Could I just drag music onto it?  The short answer is "yes", but to really make this useful, I’d need to do a few more things:

  1. Reorganize the media files to clean up where Windows Media Player originally dropped them.
  2. Edit the media tags on the files so that Artist and Song Titles are accurate and simple.
  3. Maximize the number of songs I could fit onto the player by converting everything to MP3 format.
  4. Organize them into separate playlists to accommodate whatever mood I’m in.
  5. Sit back and enjoy the sweet sounds of victory.

Reorganize the media files

Most of my digital collection was actually ripped from my CD collection using Windows Media Player, which organizes it into a folder structure that looks like this:

Folder Structure

I’m really only interested in Artist and Song Title.  If I’m in the mood for John Williams, for example, I want to hear all of his work – I don’t care if it came from the "The Spielberg/Williams Collaboration", "Harry Potter and The Sorcerer’s Stone: Soundtrack", or one of the Star Wars albums I own.  I just want to hear the music of John Williams.  So, I decided to flatten the music by removing the Album level:

Folder Structure Flattened

Next, the track numbers that prefaced the song titles were making me twitch, so I removed them:

Folder Structure No Track Numbers

The next step was to resolve all of the "Unknown Artist", "Various Artists", and other folders that had been created over time, and move those music files into folders with a real artist name.  Some of these became obvious just from the name of the song – "Takin’ Care of Business" by Bachman-Turner Overdrive, for example.  Some of these, especially the classical pieces like "Violin Concerto No. 1", took a little more work to track down.  A lot of these required me to look at the media tags attached to the file, which we’ll address next.

Edit the media tags

Each audio file has a series of tags such as Artist, Album, Song Title, Track #, etc.  I originally used these to help reorganize the music into their proper artist folders, but many of these needed to be cleaned up themselves.  Why?  Because my Sansa Clip organizes the music by these tags.  Putting the files in a folder in Windows Explorer called "Hans Zimmer" wouldn’t be enough – the song’s Artist media tag would need to reflect that name.

Originally I thought I needed an application to allow me to modify these, but I discovered that Windows Explorer can do it.  When you select a music file in Windows Explorer, the window shows a series of controls at the bottom:

Media Tag Controls

All you have to do to change these is click the tag you want to edit, type over it, and hit Enter:

Media Tags - Editing

So, my first task was going through and cleaning up the "Contributing Artists", "Album artist", and "Title" tags for each of my music files.  After updating a few, I realized how tedious this was going to be.  I don’t have an enormous digital music collection, but it’s large enough that I figured I could write something to automate the process faster than just doing it manually.

So I did.

I had already organized each music file into a folder named after the artist responsible, and had renamed the files themselves to clean up the song titles (several songs were named things like "Satisfied* [bonus tracks].mp3", so I cleaned them up to just be "Satisfied.mp3").  What if I could write a Powershell script (the shiny new tool in my development toolbox) to rework the media tags for each file based on this information?

After consulting my good friend, Google, I found a couple of posts from people who were already managing media tags from Powershell.  Using TagLib# (available from GitHub: https://github.com/mono/taglib-sharp), it was very easy to walk through my entire music collection, updating media tags as I went:

[Reflection.Assembly]::LoadFrom( (Resolve-Path ".\taglib-sharp.dll") )

$BaseMusicPath = "C:\Users\Mark\Desktop\Music"

Get-ChildItem -Path $BaseMusicPath -Filter "*.mp3" -Recurse | ForEach-Object {
    Write-Host "Processing:" $_.FullName
    $CurrentMediaFile = [TagLib.File]::Create($_.FullName)
   
    # Set the song title to the file name, minus the extension
    $CurrentMediaFile.Tag.Title = $_.BaseName
   
    # Make the AlbumArtists match the Artists (contributing artists)
    $CurrentMediaFile.Tag.AlbumArtists = $CurrentMediaFile.Tag.Artists
   
    # Save the updated tags back into the file
    $CurrentMediaFile.Save()
}

The script looks through my music folders recursively for every MP3, opens it, sets the "Title" media tag to the file name and the "AlbumArtists" media tag to the "Artists" tag.  The latter corresponds to the "Contributing Artists" tag that appears in Windows Explorer.

The script worked like a charm.  It ran through my entire collection in a matter of seconds, and took me about half an hour to piece it together.  Overall, I estimate it saved me at least an hour of drudgery, and gave me a great excuse to do something in Powershell.


Maximize the number of songs

I still had a mix of WMA and MP3 files at this point.  In the course of updating the media tags, I noticed there was a pretty large gap between the average file size of a WMA file and the average file size of an MP3 – WMAs were much larger than the MP3s.  I found a free converter from KoyoteSoft that could process my entire music collection in batch – converting all WMA files to MP3 in place.  I didn’t think to capture before and after totals, but the size savings was tremendous: 30% smaller files were very common.

I actually put the media tag editing on pause to convert everything over to MP3s.  That is why the Powershell script above only handles MP3s.  By the time I got around to writing it, EVERYTHING was an MP3.


Organize them into Playlists

The next, and what ended up being the biggest challenge, was figuring out how I could create my own playlists.  To be fair, I had not tried this with the Sansa Clip before.  What got me thinking about it was that there was a "Playlists" option on the Clip, hinting that it was supported and that I only had to figure out how to do it.

My good friend, Google, turned out to be a good start down this path.  I found a post on the Sansa Clip forums that pointed to a couple of possible approaches:

  1. If I browsed to the folder on the Clip in Windows Explorer, and right clicked on a folder or music file, I had an option for "Create Playlist".  I tried selecting multiple folders and created a playlist from them.  That dropped a .PLA file in the folder, and the player seemed to like it.  The weird thing was that this file was 0 bytes long.  Examining the file properties (again through Windows Explorer) revealed a tab called "References" that listed out all of the songs I just dropped in.  That tab would allow me to remove songs, or reorder them, but there did not appear to be any way to add new ones to an existing playlist.  If I added a new song, I’d have to reselect all of the other songs AND the new one to effectively update the playlist.  That would become unwieldy fast.
  2. The other option I found in this forum post talked about the M3U playlist file format.  This was billed as a simple text file format, which seemed much more likely to be manageable going forward.

I ended up consulting several other internet destinations to figure out what this file needed to look like, and how to get it to work on the Clip.

In addition to these posts, I did a fair amount of my own experimentation to figure out the following procedure:

  1. Create a Windows 1252 (ANSI) text file and name it with a ".m3u" file extension.
  2. Add this as the very first line of the file: #EXTM3U
  3. Add one or more relative paths to the music files to be included in the playlist.  These would be relative to the "Music" folder on the Clip where the Artist folders would be housed:

        #EXTM3U
        Antonio Vivaldi\12 Violin Concerto, for violin, strings & continuo in E major (‘La Primave.mp3
        Antonio Vivaldi\Concerto For 2 Violins In A Minor, Op. 3 No. 8 – Allegro (Mouvement 1).mp3
        Antonio Vivaldi\Four Seasons- Spring Allegro.mp3
        Émile Waldteufel\Skaters Waltz.mp3
        Franz Liszt\Hungarian Rhapsody No 2.mp3
        Franz Schubert\Moment Musical.mp3
        Frédéric Chopin\Minute Waltz.mp3
        Georges Bizet\Carmen Suite 1 Les Toreadors.mp3

    This seemed to be the minimum contents needed to get the playlist to be recognized.

    For the most part, if I kept the files in a subfolder below the Artist name, the player would not recognize them.  My decision to flatten the music files to just one level down proved to be beneficial here.  I say "for the most part" because I did have one instance where a file was 2 levels down, in an "album" folder below the Artist folder, and the player found it.  I couldn’t explain why this worked, or why moving the other files up to the Artist folder caused them to suddenly be recognized by the player.  I thought it might have something to do with the length of the overall path, but as you can see from the above samples, some of the songs I have are quite long, and the player found those just fine.

  4. Switch the player to "MTP" mode.  For the Clip, this is found under Settings\USB Mode.  My player had been set to "Auto Detect".  At least two of the posts I found mentioned the other mode, "MSC", as being completely unusable for transferring playlist files to the player.  I have not tried changing this back to "Auto Detect" or trying "MSC", and then copying the playlist files over and seeing if they still worked.  I also didn’t dig into what these two modes are.  I had been working on the playlist issue for the better part of the week, and honestly, was just interested to see it resolved rather than exploring every nook and cranny.  Perhaps another day.
  5. Place this file in the root of the "Music" folder.  I tried a few different other locations for the playlist files on the player, including the "Playlists" folder, but this was the only one where it worked.
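
For what it’s worth, generating a playlist like this programmatically only takes a few lines.  Here is a sketch in C# (the folder path and playlist name are placeholders):

    using System.IO;
    using System.Linq;
    using System.Text;

    class M3UWriter
    {
        static void Main()
        {
            string MusicRoot = @"C:\Users\Mark\Desktop\Music";      // placeholder path
            string Playlist = Path.Combine(MusicRoot, "Everything.m3u");

            // The #EXTM3U header, then one path per song, relative to the Music folder
            var Lines = new[] { "#EXTM3U" }
                .Concat(Directory.EnumerateFiles(MusicRoot, "*.mp3", SearchOption.AllDirectories)
                                 .Select(f => f.Substring(MusicRoot.Length + 1)));

            // The Clip wants Windows-1252 (ANSI), per step 1 above
            File.WriteAllLines(Playlist, Lines, Encoding.GetEncoding(1252));
        }
    }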

At this point, assuming that the music files were already on the player, the "Playlists" option on the player will show the new playlist, and let you play from it.  Not wanting to manage the playlist files by hand, I decided to go one step further and created a small WinForms application called "Playlist Forge" that lets me drag and drop individual music files, or entire folders, and constructs the playlist file for me.

Playlist Forge

If you drag an M3U file onto Playlist Forge, it opens it.

Dragging a single music file (MP3 or WMA) onto it adds it to the playlist, including the name of the file and the parent folder.  (Playlist Forge assumes the folder structure I mentioned previously, where the actual music files are in a folder named after the Artist.)

Dragging a folder onto Playlist Forge will recursively find all MP3 or WMA files, and include them in the playlist, regardless of their depth.  It would still only include the file name and the folder it was actually in, but it would dig down as deeply as needed in the folder structure to pull out all of the music files.

Once you have the right files in there, you hit "CTRL-S" to save.  If you had opened an M3U file originally, it overwrites that file.  If you had just started dragging music files onto it, it creates a new file called "NewPlaylist.m3u" on your desktop.

Finally, you can hit “CTRL-N” to clear the utility out and start a new playlist from scratch.

While this is definitely rough, it proved to be much faster to write this utility and use it than trying to pull all of the paths and files out manually.  It will also allow me to easily edit the files later, as I add music to my collection.

The utility – both the source and the compiled application – can be found in the PlaylistForge.zip archive at http://tinyurl.com/MarkGilbertSource if you are interested.  (And yes, I did see that other people had built apps like this already, but this seemed like a fun little app to write.)


Sit back and enjoy the sweet sounds of victory

A lot of research and work for this, but after all of it I am much happier about the state of my music collection and the prospects for managing it going forward.

January 8, 2013 Posted by | Powershell, Visual Studio/.NET | Comments Off on How I got my groove back – Music Files, Playlists, and the Sansa Clip

Meta Insanity 2 – Strings and Meta Tags revisited

In October of 2011, I posted about some funkiness with trying to embed an expression hole into the “content” attribute of a <meta> tag.  I dubbed it “Meta Insanity”.

This past week, at my functional language users group meeting (FLUNK),  we took a break from doing problems or talks on functional programming, and instead did lightning talks about something tech-related.  I built my talk around my October blog post.  The talk went great – everyone got into trying other combinations of markup to either 1) figure out why it was failing, or 2) figure out better ways to get around it than just appending empty strings.  What follows are some of the other things we discovered that worked and didn’t work.

For this list, “worked” means that a tag such as the following rendered the expression hole correctly:

<meta name="description1" content="<%=Me.SomeMetaValue%>" />

While “didn’t work” means that the leading angle bracket (<) was HTML-encoded before the expression hole was evaluated, which led to rendered markup like this:

<meta name=”description1″ content=”&lt;%=Me.SomeMetaValue%>” />

  1. Moving the meta tag into the <body> tag worked.
  2. Putting a different tag with an expression hole, such as <script>, or even fake ones like <blah>, worked.
  3. Removing the runat=”server” attribute from the <head> tag worked for everything, including <meta> tags.
  4. Removing the double quotes from the markup, and adding them to the string that would be returned in the expression hole, worked.
  5. Adding attributes to the <head> tag didn’t work.
  6. Adding attributes to the <head> tag, and removing the runat=”server” attribute, worked.

And probably the best find of the evening (and by “best” I mean “face-palm-funny”) was HTML encoding the double quotes in the source markup, like this:

<meta name="description6" content=&quot;<%=Me.SomeMetaValue%>&quot; />

The rendered result?  The quotes stayed encoded, but the expression hole was properly evaluated:

<meta name=”description6″ content=&quot;Blah blah&quot; />

So, what did we accomplish?  It looks like the combination of runat=”server” and <meta> tags is the crux of this issue.  Meta tags will always be needed on a professional site, but I can’t remember the last time I needed to access the <head> tag from the server, so it seems like simply removing that attribute is now the cleanest way to get around this issue.
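
In other words, the pattern that now seems safest looks like this (a sketch – the rest of the head contents are omitted):

<head>
    <%-- no runat="server" here --%>
    <meta name="description" content="<%=Me.SomeMetaValue%>" />
</head>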

May 12, 2012 Posted by | Visual Studio/.NET | Comments Off on Meta Insanity 2 – Strings and Meta Tags revisited

Kinect in the Abstract: Working with the Sealed SkeletonData and JointsCollection classes

My latest side project involving the Kinect started to get a bit hairy.  The logic for what we were trying to do was at least an order of magnitude more complex than the Target Tracking system my colleagues and I built last year.  It functioned, but it was getting exponentially more difficult to add features to it, let alone debug it.

So, suffering from a lull in my regular project work over the holiday break, I decided to start building some unit tests for it.  If nothing else, having a solid test suite would allow me to regression-test the application whenever I monkeyed with the code, and THAT would enable some good-sized refactorings that were long overdue.  My first task, then, was to figure out how to mock out the data coming off the Kinect.  That task quickly hit a wall.

The application uses the SkeletonData object available in the SkeletonFrameReady event.  My original event handler looked something like this:

void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{

    List<SkeletonData> ActiveSkeletonsRaw;
    SkeletonFrame allSkeletons = e.SkeletonFrame;

    ActiveSkeletonsRaw = (from s in allSkeletons.Skeletons
                          where s.TrackingState == SkeletonTrackingState.Tracked
                          select s).ToList();

    this._MyManager.UpdatePositions(ActiveSkeletonsRaw);
}

The UpdatePositions() method would handle moving the objects around based on the new positions of the skeletons/joints, and that was the primary method I wanted to test.  I figured if I could create my own SkeletonData object, and pass that into UpdatePositions, I could test any scenario I wanted.  Unfortunately, the SkeletonData class is sealed, and there aren’t any public constructors on it.  So, I went the route of writing my own version of SkeletonData – one that I could create objects from, and would effectively function the same as SkeletonData:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Research.Kinect.Nui;


public class SkeletonDataAbstraction : ISkeletonData
{
    public JointsCollectionAbstraction Joints { get; private set; }
    public Vector Position { get; set; }
    public SkeletonQuality Quality { get; set; }
    public int TrackingID { get; set; }
    public SkeletonTrackingState TrackingState { get; set; }
    public int UserIndex { get; set; }


    public SkeletonDataAbstraction() 
    {
        this.InitializeJoints();
    }
    public SkeletonDataAbstraction(Microsoft.Research.Kinect.Nui.SkeletonData RawData) : this()
    {
        foreach (Joint CurrentJoint in RawData.Joints)
        {
            this.UpdateJoint(CurrentJoint);
        }
        
        this.Position = RawData.Position;
        this.Quality = RawData.Quality;
        this.TrackingID = RawData.TrackingID;
        this.TrackingState = RawData.TrackingState;
        this.UserIndex = RawData.UserIndex;
    }

    private void InitializeJoints()
    {
        this.Joints = new JointsCollectionAbstraction();
        foreach (JointID CurrentJointID in Enum.GetValues(typeof(JointID)))
        {
            this.Joints.Add(new Joint()
                                        {
                                            ID = CurrentJointID,
                                            Position = new Vector() { X = 0.0f, Y = 0.0f, Z = 0.0f, W = 0.0f },
                                            TrackingState = JointTrackingState.NotTracked
                                        });
        }
    }

    public void UpdateJoint(Joint NewJoint)
    {
        this.Joints[NewJoint.ID] = new Joint() 
                                                { ID = NewJoint.ID,
                                                  Position = new Vector()
                                                                            { X = NewJoint.Position.X,
                                                                              Y = NewJoint.Position.Y,
                                                                              Z = NewJoint.Position.Z,
                                                                              W = NewJoint.Position.W
                                                                            },
                                                  TrackingState = NewJoint.TrackingState
                                                };
    }
}

When the class is instantiated, the Joints collection is also instantiated with a "blank" Joint object for every joint defined by the Kinect (the complete list is defined by the Microsoft.Research.Kinect.Nui.JointID enumeration).  Then, the UpdateJoint method is called to overwrite those blank joints with the real values.  I also use this method in the unit tests to precisely place the joints I was interested in, just before running a given test.

I thought I would end up needing to mock out portions of the class, so I created an interface for it as well:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Research.Kinect.Nui;

public interface ISkeletonData
{
}

As it turns out, I didn’t need to mock anything out – I can just create SkeletonDataAbstraction classes, and pass them directly into UpdatePositions.  I decided to keep the interface around, just in case I later found something that required a mock.

I also needed to be able to construct a JointsCollection object (what the SkeletonData.Joints property is defined as), but that was also marked sealed with no public constructors.  So, I created a JointsCollectionAbstraction object for it:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;
using Microsoft.Research.Kinect.Nui;
using System.Collections.ObjectModel;
using System.ComponentModel;

public class JointsCollectionAbstraction : List<Joint>, IEnumerable
{
    // Allow the collection to be indexed by JointID, mirroring the
    // indexer on the SDK's sealed JointsCollection
    public Joint this[JointID i]
    {
        get
        {
            return this[(int)i];
        }
        set
        {
            this[(int)i] = value;
        }
    }
}

After putting these together, I rewrote my original application code using the new abstraction layer, to make sure I had captured everything I needed to:

void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e) 
{

    List<SkeletonData> ActiveSkeletonsRaw;
    SkeletonFrame allSkeletons = e.SkeletonFrame;

    ActiveSkeletonsRaw = (from s in allSkeletons.Skeletons
                          where s.TrackingState == SkeletonTrackingState.Tracked
                          select s).ToList();

    List<SkeletonDataAbstraction> ActiveSkeletons;
    ActiveSkeletons = new List<SkeletonDataAbstraction>();
    foreach (SkeletonData CurrentSkeleton in ActiveSkeletonsRaw)
    {
        ActiveSkeletons.Add(new SkeletonDataAbstraction(CurrentSkeleton));
    }

    this._MyManager.UpdatePositions(ActiveSkeletons);
}

That worked like a charm.  With each SkeletonFrameReady event-raise, I copy the key pieces of information from the Kinect over to my own structures, and use those from that point on.  Now the task of writing tests around this could begin in earnest.  I wrote a "CreateSkeleton" method for my unit tests that would encapsulate setting one of these up:

private SkeletonDataAbstraction CreateSkeleton(SkeletonTrackingState NewTrackingState, int NewUserIndex)
{
    SkeletonDataAbstraction NewSkeleton;

    NewSkeleton = new SkeletonDataAbstraction();
    NewSkeleton.Position = new Vector();
    NewSkeleton.Quality = SkeletonQuality.ClippedBottom;
    NewSkeleton.TrackingID = NewUserIndex + 1;
    NewSkeleton.TrackingState = NewTrackingState;
    NewSkeleton.UserIndex = NewUserIndex;

    NewSkeleton.UpdateJoint(new Joint()
                                        {
                                            ID = JointID.HandLeft,
                                            Position = new Vector() { X = X_WHEN_HAND_MOVES_AWAY, Y = this._OriginalY, Z = this._OriginalZ, W = this._OriginalW },
                                            TrackingState = JointTrackingState.Tracked
                                        });
    NewSkeleton.UpdateJoint(new Joint()
                                        {
                                            ID = JointID.HandRight,
                                            Position = new Vector() { X = X_WHEN_HAND_MOVES_AWAY, Y = this._OriginalY, Z = this._OriginalZ, W = this._OriginalW },
                                            TrackingState = JointTrackingState.Tracked
                                        });

    // Other joints overwritten here...

    return NewSkeleton;
}

(Note, the values for X_WHEN_HAND_MOVES_AWAY, _OriginalY, _OriginalZ, and _OriginalW are merely floats, defined specific to the application.)

Now I could easily create a list of Skeletons to track, with joints positioned just so, and pass that structure into UpdatePositions.
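
To give a feel for how a test reads at this point, here is a sketch – assuming the fixture holds a manager instance, since the manager’s API beyond UpdatePositions() isn’t shown in this post:

[Test]
public void UpdatePositions_MovesObjectForTrackedSkeleton()
{
    List<SkeletonDataAbstraction> ActiveSkeletons = new List<SkeletonDataAbstraction>();
    ActiveSkeletons.Add(this.CreateSkeleton(SkeletonTrackingState.Tracked, 0));

    this._MyManager.UpdatePositions(ActiveSkeletons);

    // Assertions here depend on what the manager exposes - for example,
    // checking that the on-screen object tied to this skeleton moved.
}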


After I had most of this built out, I found a couple of other posts from people doing essentially the same thing.

The first one is an interesting forum post where one of the Microsoft guys admits that declaring the SkeletonData and other classes Sealed was probably not the brightest idea.

Thankfully, the wall I hit ended up coming only up to my knees, so after a few bumps and bruises I was over it.

January 10, 2012 Posted by | Microsoft Kinect, Visual Studio/.NET | Comments Off on Kinect in the Abstract: Working with the Sealed SkeletonData and JointsCollection classes

With a little help from my Friends – TDD, Mocking, and InternalsVisibleTo

My current project is building a .NET library that will interface with several different web services.  Some of those services were not ready when I started the library, and I wanted to push myself further into mocking, so I wrote a .NET interface and a wrapper class for each web service.  That allowed me to mock the services out, and simulate the response for a given request.  Once a web service became available, I’d implement that interface, and pass the requests through to the real service.

The goal here was to completely abstract the actual web service calls and responses from the user of the library.  However, in order for NUnit to be able to test those interfaces and other classes, they had to be declared Public.  That meant that someone actually using the library would see all of that structure in Intellisense – even when they would never use it, and would probably be confused by it.  This was the unfortunate tradeoff of TDD – or so I thought.

One of my colleagues, Doug, found a little assembly attribute called InternalsVisibleTo.  Applying this to your assembly (in the AssemblyInfo.vb/AssemblyInfo.cs file) allows non-Public members to be visible to the specified external assembly.  That allowed me to change the declaration on the Public classes that I didn’t really want exposed to a consumer of the library to Friend (VB’s equivalent of C#’s "internal" declaration).  That meant that I could effectively expose those classes and other items to my test assembly, but hide them from every other assembly.  For more information on this attribute, check these links out:

http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx#Y1557
http://devlicio.us/blogs/derik_whittaker/archive/2007/04/09/internalsvisibleto-testing-internal-methods-in-net-2-0.aspx

(Please note, the second is an older post by Derik Whittaker, and at the time he wrote it this was only available to C# assemblies; that has since been remedied – you can use it in VB now as well.)

So I added a line to expose this to my test assembly.  Then I started systematically changing Publics to Friends, recompiling, and re-running the unit tests.  I ran into a few bumps along the way.

First, I was doing a lot of constructor injection in association with the mocking where the parameter-less constructors would set up the lower-level objects, but the other variants would allow me to pass those objects in (the passed-in objects would be my mock objects).  In the course of this rework, I ended up hiding a lot of those classes.  Initially, the constructor variants that used them were marked Public, which the compiler had a fit about – I couldn’t expose those classes via the constructor parameters because the classes were now marked as Friend, but the constructors were marked as Public.  Changing the constructor designations to Friend solved this.

Second, when I started changing the classes used by my mocking framework, Moq, I found that the InternalsVisibleTo line for my test assembly wasn’t enough.  I figured the Moq assembly needed to be explicitly allowed, too.  I tried the code-roulette approach first, without success.  Then I consulted the internets, which of course had the answer.  Andrey Shchekin had the solution – DynamicProxyGenAssembly2 (http://blog.ashmind.com/2008/05/09/mocking-internal-interfaces-with-moq/).  Yeah, that was totally going to be my next guess.  Uh-huh.
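
For reference, the finished attribute lines look something like this (shown in C# syntax; the assembly names are placeholders, and a strong-named assembly would also need the full public key appended):

// In AssemblyInfo.cs (or the <Assembly: ...> form in AssemblyInfo.vb)
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLibrary.Tests")]

// Lets Moq's generated proxies see the internal/Friend types
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]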

So, I have my mocking/TDD cake and get to eat it too.  The library footprint is nicely trimmed back, without much of the original clutter, but I can still unit-test to my heart’s content.  Many thanks to Doug for finding this little gem!

The only thing I wasn’t able to accomplish with this was hiding the SOAP web service structure.  I tried changing the Visual Studio-generated classes to Friend, but that started failing when I tried to call the service.  Perhaps there is another attribute that at least hides these from Intellisense.  A search for another day.

December 7, 2011 Posted by | Visual Studio/.NET | Comments Off on With a little help from my Friends – TDD, Mocking, and InternalsVisibleTo

Meta Insanity – Strings and Meta Tags

It was another one of those days.  And this time, I DID go home after I saw this in action.

On many of my past Web Forms sites, I’ve had to include a dynamic meta/description tag in the header.  The "dynamic" part comes in when the page being rendered is a product page, a recipe page, or something else where the data is drawn out of a database.

When I first started doing this, I tried something like this*:

<meta name="description1" content="<%=Me.SomeMetaValue%>" />


Where “SomeMetaValue” is a property of the page, or a reference to a shared value somewhere else in the solution.  This version fails because the inner left angle bracket gets encoded, thus ruining the server-side expression hole:

<meta name=”description1″ content=”&lt;%=Me.SomeMetaValue%>” />

To get around this, I replaced the entire meta/description tag with an ASP.NET literal, and inserted the value I wanted on the server side.  That worked.

Recently, this issue came up again.  Doug had taken over one of the sites where I had done this, and he refused to believe that the data wouldn’t render correctly in the expression hole.  He tried it, and sure enough the < got encoded.  Doug kept at it, though, and found that if you removed the double quotes around the expression hole, it rendered the value correctly:

<meta name="description2" content=<%=Me.SomeMetaValue%> />

Unfortunately, this was no longer valid HTML:

<meta name=”description2″ content=Blah blah />

Doug remembered an old issue he and I troubleshot, documented at System.String has me in knots.  He modified the original attempt to prepend an empty string:

<meta name="description3" content="<%="" & Me.SomeMetaValue%>" />


That renders perfectly:

<meta name=”description3″ content=”Blah blah” />

Putting the empty string after SomeMetaValue also works just fine:

<meta name=”description3″ content=”<%=Me.SomeMetaValue & “”%>” />

What.

The.

Heck?!?

I had written off the encoded angle bracket to an overly eager web server (Cassini and IIS behaved the same in this case), but why in the world would tacking on an empty string force it to NOT be encoded and allow it to work?  Since we’re dealing with silly strings here, I followed the lesson learned in System.String has me in knots, and added .ToString onto the end of Me.SomeMetaValue:

<meta name="description4" content="<%=Me.SomeMetaValue.ToString%>" />


Aaaaaand, we’re back to encoding:

<meta name=”description4″ content=”&lt;%=Me.SomeMetaValue.ToString%>” />

Sigh.  Oh Visual Studio.  Why must you taunt me so?

For a fully working – er, FAILING – sample of the above, check out the MetaInsanity.zip archive at http://TinyURL.com/MarkGilbertSource.


* I used “description1”, “description2”, etc. just for illustrative purposes for this blog post.  The production sites have this value as simply “description”.

October 19, 2011 Posted by | Visual Studio/.NET | 1 Comment

One web.config to rule them all – eh, not so fast

Well, I thought I had the hard parts worked out.

In my previous post I threw out this little bit of hand-waving:

The hard part of this was not figuring out how to put 5 environments’ worth of values into a single web.config – that’s already a solved problem in my shop (and I’m sure you could come up with your own approach).  The hard part here was figuring out how to programmatically override the values that I would normally put into a web.config.

Well, as it turns out, changing out the values at runtime was the hard part here.  In fact, when it came to configuring ELMAH to generate email notifications, it was nearly insurmountable.

The mechanisms that we’ve developed to handle a "dynamic" web.config are all centered on the assumption that we can do the following (another gem I wrote in the last post):

At runtime, figure out the URL that the site is being executed under, and look up the values for that URL in the web.config.

And here’s where things fell apart.  You see,

  • In order to figure out the current URL, I needed a Request object to inspect.
  • The soonest I could get at the Request was the Application’s Begin_Request event.
  • The ELMAH module, however, was getting initialized before Application.Start, let alone Begin_Request.
  • Once initialized, ELMAH caches the now-incorrect settings, and provides no mechanism to force an update after the fact.

If I can’t override the settings when the application first starts up, how about forcing an override when the first error occurs?  There was an ELMAH event for that.  Unfortunately, being a module, the ELMAH email module ends up processing the error message AFTER the Request has already been processed, so it’s no longer available.

Rather than trying to come up with an even more elaborate (aka "convoluted") mechanism for saving off the current environment and then overriding ELMAH’s cache when the time came, I considered modifying the ELMAH source to initialize later, have a way to update the cached settings, etc.  Unfortunately, I had already sunk too much time into trying to get this to work, so I opted for a much simpler solution – a simple email method of my own devising, invoked in the Application.Error handler.  It functions, and occurs while the Request is still available, so I had a much easier time wiring it into the one-web.config structure I had in place.  It’s not as robust as ELMAH, and certainly isn’t the way I wanted to go with it.  After using ELMAH for dozens of sites over the last few years, I had come to rely on it completely, and it felt quite odd to have to go without it.
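
A stripped-down sketch of that fallback (in C# for illustration – the addresses and host are placeholders, and the real version pulls them from the one web.config):

void Application_Error(object sender, EventArgs e)
{
    Exception LastError = Server.GetLastError();

    // The Request is still in scope here, so environment-specific values
    // can be looked up by URL before the mail goes out
    var Message = new System.Net.Mail.MailMessage(
        "errors@blah.com", "me@blah.com",
        "Error on " + Request.Url.Host, LastError.ToString());

    using (var Client = new System.Net.Mail.SmtpClient("mail.blah.com"))
    {
        Client.Send(Message);
    }
}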

The good news is that the ELMAH mail module was the only one to give me trouble.  The logging module CAN be updated on Application.Begin_Request.  Even there, though, I went a different route.  Instead of logging the errors to the file system, I opted to use the in-memory provider.  Here is my revised web.config (only the ELMAH-specific bits are shown):

<?xml version="1.0"?>
<configuration>
  <configSections>

    <!-- 
            Error Logging Modules and Handlers (ELMAH) 
            Copyright (c) 2004-7, Atif Aziz.
            All rights reserved.
        -->
      <sectionGroup name="elmah">
        <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah"/>
        <section name="security" requirePermission="false"  type="Elmah.SecuritySectionHandler, Elmah"/>
        <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah"/>
      </sectionGroup>
  </configSections>

  
  <system.web>
    <httpHandlers>
      <add verb="POST,GET,HEAD" path="errors/report.axd" type="Elmah.ErrorLogPageFactory, Elmah"/>
    </httpHandlers>

    <httpModules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah"/>
      <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
    </httpModules>

  </system.web>


  <elmah>
    <errorLog type="Elmah.MemoryErrorLog, Elmah" />
    <security allowRemoteAccess="yes"/>
    <errorFilter>
      <test>
        <equal binding="HttpStatusCode" value="404" type="Int32"/>
      </test>
    </errorFilter>
  </elmah>

  
  <system.webServer>

    <modules runAllManagedModulesForAllRequests="true">
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah"/>
      <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
    </modules>

    <handlers>
      <add name="ElmahErrorReportingPage" verb="POST,GET,HEAD" path="errors/report.axd" type="Elmah.ErrorLogPageFactory, Elmah"/>
    </handlers>
  </system.webServer>

</configuration>

Since there was no longer any difference between the environments, I didn’t have to override anything.

The ActiveRecord initialization also seemed to go in as I expected.  Both ELMAH logging and ActiveRecord have been in place and working in our environments for a solid week now.

So lesson learned – get the code actually working before I blog on it.

October 17, 2011 Posted by | Castle ActiveRecord, Visual Studio/.NET | Comments Off on One web.config to rule them all – eh, not so fast

One web.config to rule them all

Most of the web-centric work I’ve done in my career and especially in the last four years has involved developing sites that are designed to be deployed to multiple environments: my local workstation, our internal Dev and Staging servers, and up to three environments at the client.  One of my strategies for managing the differences in file paths, email recipients, database connection strings, etc. among all of those environments is to push anything that changes from one environment to another into the web.config.  Then, I have a separate web.config for each environment, and I rename (or have the client’s technical staff rename) the appropriate file for a given environment.

The Setup
That system has worked well for years.  That is, until a few weeks ago when I was assigned to a new client who insists that there only be a single web.config that covers all environments.  Doing this allows them to simply copy the files from one environment to the next wholesale, and eliminates the need to rename anything.  The other developers who have worked with this client for a while have created a couple of different frameworks for implementing this requirement, but they boil down to the same basic approach:

1) Put all values for all environments into the web.config, but tie them to the corresponding URL for each environment.

2) At runtime, figure out the URL that the site is being executed under, and look up the values for that URL in the web.config.

My colleagues affectionately refer to this scheme as “one web.config to rule them all”.
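
Concretely, the lookup might be sketched like this (a hypothetical illustration, not the client’s actual framework – the key format and helper name are my own):

// Hypothetical scheme: appSettings keys are prefixed with the host name,
// and a helper resolves the right one from the current Request.
//
//   <appSettings>
//     <add key="dev.example.com:smtpServer" value="mail.dev.example.com"/>
//     <add key="www.example.com:smtpServer" value="mail.example.com"/>
//   </appSettings>

using System.Configuration;
using System.Web;

public static class EnvironmentConfig
{
    public static string GetSettingForCurrentHost(string key)
    {
        string host = HttpContext.Current.Request.Url.Host;
        return ConfigurationManager.AppSettings[host + ":" + key];
    }
}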

The Challenge
A couple of the key components that I incorporate into the sites I work on are ELMAH and Castle ActiveRecord.  Naturally, since my current task involves building three brand new sites, I wanted to drop these in from the beginning.  The challenge was how to use them given this client’s requirement.  The hard part was not figuring out how to put five environments’ worth of values into a single web.config – that’s already a solved problem in my shop (and I’m sure you could come up with your own approach).  The hard part was figuring out how to programmatically override the values that I would normally put into the web.config.

The Solution – ELMAH
Let’s start with ELMAH.  Normally, I’d have these sections in my web.config (only the ELMAH-specific portions are shown here):

<configuration>

  <configSections>
    <!-- 
            Error Logging Modules and Handlers (ELMAH) 
            Copyright (c) 2004-7, Atif Aziz.
            All rights reserved.
        -->
    <sectionGroup name="elmah">
      <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah"/>
      <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah"/>
      <section name="security" type="Elmah.SecuritySectionHandler, Elmah"/>
      <section name="errorFilter" type="Elmah.ErrorFilterSectionHandler, Elmah"/>
    </sectionGroup>
  </configSections>

  
  <system.web>
    <httpModules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah"/>
      <add name="ErrorMail" type="Elmah.ElmahMailModule"/>
      <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
     </httpModules>
  </system.web>

  <elmah>
    <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/bin/Logs"/>

 


 <errorMail from="me@blah.com"
 to="blah@blah.com"
 subject="Test Error"
 smtpServer="blah.com"/>
 <security allowRemoteAccess="yes"/>
 <errorFilter>
 <test>
 <equal binding="HttpStatusCode" value="404" type="Int32"/>
 </test>
 </errorFilter>
 </elmah>

 

</configuration>
 

The things that differ per environment are:

*) The "logPath" attribute of the "elmah/errorLog" tag.

*) The "to", "subject", and "smtpServer" attributes of the "elmah/errorMail" tag.

My colleague, Joel, found that you can write a class that inherits from Elmah.ErrorMailModule, override the settings there, and use that in the httpModules block.  First, the class:

Public Class ElmahMailExtension
    Inherits Elmah.ErrorMailModule

    ' The base GetConfig() hands back the <errorMail> settings as a
    ' dictionary; the late-bound indexing below assumes Option Strict Off.
    Protected Overrides Function GetConfig() As Object
        Dim o As Object = MyBase.GetConfig()
        o("smtpServer") = "mail.blah.com"
        o("subject") = String.Format("Blah message at {0}", Now.ToLongTimeString())
        o("to") = "me@blah.com"
        Return o
    End Function
End Class

And the web.config modification:

<configuration>
 
  <system.web>

    <httpModules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah"/>
      <add name="ErrorMail" type="MvcApplication1.ElmahMailExtension"/>
      <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
     </httpModules>

  </system.web>

</configuration>

Simple.

Overriding the logging settings requires a slightly different tack.  Instead of inheriting from Elmah.ErrorLogModule, I create a class that inherits from Elmah.XmlFileErrorLog:

Imports System.Collections
Imports System.Web.Hosting

Public Class ElmahLogExtension
    Inherits Elmah.XmlFileErrorLog

    ' Ignore the configured log path entirely and substitute our own.
    Public Sub New(ByVal config As IDictionary)
        MyBase.New(HostingEnvironment.MapPath("~/bin/Logs2"))
    End Sub

    Public Sub New(ByVal logPath As String)
        MyBase.New(logPath)
    End Sub

End Class

I couldn’t find a convenient collection to change values in, so I cheated.  Using JustDecompile, I looked at what the two constructors were doing.  They basically just manipulate the log path passed in.  So, I leave the New(String) variant alone, and modify the New(IDictionary) variant to ignore the incoming "config" parameter and substitute the path that I want to use.  One of the things I noticed the XmlFileErrorLog constructor doing was replacing a leading "~/" in the path with the full path on the file system; full log paths won’t require this.
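
The gist of what the decompiled constructor appears to do is roughly this (a paraphrase from memory, not ELMAH’s actual source):

using System.Collections;
using System.Web.Hosting;

// Paraphrase of the decompiled behavior: pull "logPath" out of the config
// dictionary, and expand a leading "~/" to a physical path.
static string ResolveLogPath(IDictionary config)
{
    string logPath = (string)config["logPath"];
    if (logPath != null && logPath.StartsWith("~/"))
    {
        logPath = HostingEnvironment.MapPath(logPath);
    }
    return logPath;
}
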
The Solution – ActiveRecord

Here is a common ActiveRecord configuration for me (just the ActiveRecord-relevant parts are shown here):

<configuration>

  <configSections>
    
    <section name="activerecord" type="Castle.ActiveRecord.Framework.Config.ActiveRecordSectionHandler, Castle.ActiveRecord" />
    <section name="nhibernate" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" />
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />

  </configSections>

  <activerecord isDebug="false" threadinfotype="Castle.ActiveRecord.Framework.Scopes.HybridWebThreadScopeInfo, Castle.ActiveRecord">
    <config database="MsSqlServer2005" connectionStringName="MyTestDB">
    </config>
  </activerecord>

  <log4net>
  ...

  </log4net>

  <nhibernate>
  ... 
  </nhibernate>

  <connectionStrings>
    <add key="MyTestDB" value="Database=MyDBName;Server=MyServer,1433;User ID=MyUser;Password=MyPassword;" />
  </connectionStrings>

</configuration>

My Application_Start method in Global.asax initializes ActiveRecord:

Sub Application_Start()
    AreaRegistration.RegisterAllAreas()

    Dim MyConfig As IConfigurationSource = Castle.ActiveRecord.Framework.Config.ActiveRecordSectionHandler.Instance
    Dim MyAssemblies As System.Reflection.Assembly() = New System.Reflection.Assembly() {System.Reflection.Assembly.Load("MvcApplication1")}
    ActiveRecordStarter.Initialize(MyAssemblies, MyConfig)
    AddHandler Me.EndRequest, AddressOf Application_EndRequest

    RegisterRoutes(RouteTable.Routes)
End Sub

The primary piece that I need to override is the connection string.  My first attempts were in a similar vein to the ELMAH work – create a class that inherits from Castle.ActiveRecord.Framework.Config.ActiveRecordSectionHandler, and use that in the web.config.  However, I found an even easier way – simply use a different IConfigurationSource object in the call to ActiveRecordStarter.Initialize, one that is constructed programmatically.  As it turns out, there is even a built-in class to do this – InPlaceConfigurationSource:

Sub Application_Start()
    AreaRegistration.RegisterAllAreas()

    Dim MyConfig As InPlaceConfigurationSource = InPlaceConfigurationSource.Build(
        DatabaseType.MsSqlServer2005,
        "Database=MyTest;Server=blahsqlsrvr,1433;User ID=blah;Password=blah;")
    MyConfig.ThreadScopeInfoImplementation = GetType(Framework.Scopes.HybridWebThreadScopeInfo)

    Dim MyAssemblies As System.Reflection.Assembly() = New System.Reflection.Assembly() {System.Reflection.Assembly.Load("MvcApplication1")}
    ActiveRecordStarter.Initialize(MyAssemblies, MyConfig)
    AddHandler Me.EndRequest, AddressOf Application_EndRequest

    RegisterRoutes(RouteTable.Routes)
End Sub

Setting the ThreadScopeInfoImplementation property allows me to reproduce the "threadinfotype" attribute of the <activerecord> block in the web.config.

Using this allows me to completely dump the <activerecord> block and the configSections/section entry that references it:

<configuration>

  <configSections>
    
    <!--<section name="activerecord" type="Castle.ActiveRecord.Framework.Config.ActiveRecordSectionHandler, Castle.ActiveRecord" />-->
    <section name="nhibernate" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" />
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />

  </configSections>

  <!--<activerecord isWeb="true" isDebug="false" threadinfotype="Castle.ActiveRecord.Framework.Scopes.HybridWebThreadScopeInfo, Castle.ActiveRecord">
    <config database="MsSqlServer2005" connectionStringName="MyTestDB">
    </config>
  </activerecord>-->

  ...  
</configuration> 

The Conclusion

I’m not sold on the “one web.config to rule them all” approach to maintaining environment settings, but at least I don’t have to give up my favorite frameworks as a result.

September 28, 2011 Posted by | Castle ActiveRecord, Visual Studio/.NET | 5 Comments

Target-Tracking with the Kinect, Part 3 – Target Tracking Improved, and Speech Recognition

In Part 1 of this series, I went through the prerequisites for getting the Kinect/Foam-Missile Launcher mashup running.  In Part 2, I walked through the core logic for turning the Kinect into a target-tracking system, but I ended it talking about some major performance issues.  In particular, commands to the launcher would block updates to the UI, which meant the video and depth feeds were very jerky. 

In this third and final part of the series, I’ll show you the multi-threading scheme that solved this problem.  I’ll also show you the speech recognition components that allowed the target to say the word "Fire" to actually get a missile to launch. 

What did you say?

We had tried to implement the speech recognition feature by following the "Audio Fundamentals" tutorial.  That code looked like it SHOULD work, but there were a couple of differences between the tutorial app and ours: the tutorial’s was a C# console application, while ours was a VB WPF application.  As it turns out, those two differences made ALL the difference.

In the demo, Dan (the host) mentions the need for the MTAThread() attribute on the Main() routine in his console app.  Since our solution up to this point was VB – which has no explicit Main() routine to decorate – it looked like we would need an equivalent.  I tried adding the attribute every place that didn’t generate a compile error, but nothing worked – the application kept throwing this exception when it fired up:

Unable to cast COM object of type ‘System.__ComObject’ to interface type ‘Microsoft.Research.Kinect.Audio.IMediaObject’. This operation failed because the QueryInterface call on the COM component for the interface with IID ‘{D8AD0F58-5494-4102-97C5-EC798E59BCF4}’ failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

Stack Trace:
       at System.StubHelpers.StubHelpers.GetCOMIPFromRCW(Object objSrc, IntPtr pCPCMD, Boolean& pfNeedsRelease)
       at Microsoft.Research.Kinect.Audio.IMediaObject.ProcessOutput(Int32 dwFlags, Int32 cOutputBufferCount, DMO_OUTPUT_DATA_BUFFER[] pOutputBuffers, Int32& pdwStatus)
       at Microsoft.Research.Kinect.Audio.KinectAudioStream.RunCapture(Object notused)
       at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
       at System.Threading.ThreadHelper.ThreadStart(Object obj)

I decided to try a different tack.  I wrote a C# console app and copied all of Dan’s code into it (removing the Using statements and initializing the variables manually to avoid scoping issues).  That worked right out of the gate.  Since we were very short on time (this was two days from the demo at this point), I decided to port our application to C# and then incorporate the speech recognition pieces.

First, the "setup" logic was wrapped into a method called "ConfigureAudioRecognition" (I pretty much copied this right from the tutorial).  That method was invoked in the Main window’s Loaded event, on its own thread.  A sketch of what it does appears below – reconstructed from the tutorial, so treat the recognizer ID and audio format values as assumptions from the beta SDK samples rather than our exact code:
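
using System;
using System.IO;
using System.Linq;
using Microsoft.Research.Kinect.Audio;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

private void ConfigureAudioRecognition()
{
    // Kinect microphone array, tuned for speech (per the tutorial).
    var source = new KinectAudioSource
    {
        FeatureMode = true,
        AutomaticGainControl = false,
        SystemMode = SystemMode.OptibeamArrayOnly
    };

    // The Kinect-specific recognizer that ships with the speech runtime.
    RecognizerInfo ri = SpeechRecognitionEngine.InstalledRecognizers()
        .First(r => "SR_MS_en-US_Kinect_10.0".Equals(r.Id, StringComparison.OrdinalIgnoreCase));

    var sre = new SpeechRecognitionEngine(ri.Id);

    // The entire grammar is a single word: "fire".
    var gb = new GrammarBuilder { Culture = ri.Culture };
    gb.Append(new Choices("fire"));
    sre.LoadGrammar(new Grammar(gb));
    sre.SpeechRecognized += sre_SpeechRecognized;

    // Feed the Kinect's audio stream to the recognizer and listen continuously.
    Stream audio = source.Start();
    sre.SetInputToAudioStream(audio,
        new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
    sre.RecognizeAsync(RecognizeMode.Multiple);
}

In addition to initializing the objects and defining the one-word grammar ("Fire"), the method adds an event handler for the recognizer engine’s SpeechRecognized event: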

private void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    // Only fire if the launcher exists, we're in auto-track mode, and the
    // recognizer is at least 95% confident it heard "fire".
    if (this._Launcher != null &&
        this._IsAutoTrackingEngaged &&
        e.Result.Confidence > 0.95)
    {
        this.FireCannon();
    }
}

The command to launch a missile is only given if the Launcher object is defined, the app is in "auto-track" mode, and the confidence level of the recognition engine is greater than 95%.  That last check is an amusing one.  Before I included it, I could read a sentence that happened to contain some word with the letter "f", like "if", and the missile would launch.  Inspecting the Confidence property, I found that those false positives only scored in the 20-30% range; when I said "Fire", the value was 96-98%.  The confidence check helps tremendously, but it’s still not perfect – words like "fine" can fool it.  It’s much better than having it fire on every "f", though.


Take a number

Doug, Joshua, and I had discussed some solutions to the UI-update problem earlier in the week, and the most promising one looked like using a BackgroundWorker (BW) to send commands to the launcher asynchronously.  That was relatively easy to drop into the solution, but I almost immediately hit another problem.  The launcher was getting commands sent to it much more frequently than my single BW could handle, and I started getting runtime exceptions to the effect of "process is busy, go away".  I found an IsBusy property on the BackgroundWorker that I could check to see if it had returned yet, but that meant I would have to wait for it to come back before I could send another command – basically the original blocking issue, just one step removed.

I briefly toyed with the idea of spawning a new thread with every command, but because they were all asynchronous, there was no way to guarantee they would complete in the order I generated them.  Left-left-fire-right looks a lot different than fire-right-left-left.  What I really needed was a way to stack up the requests and force them to execute sequentially.  What I found was an unbelievably perfect solution from Matt Valerio, in his post titled "A Queued BackgroundWorker Using Generic Delegates".  As the title suggests, he wrote a class called "QueuedBackgroundWorker" that adds each BW to a queue, then pops them off and processes them in order.  This was EXACTLY what I needed.  It was also the most mind-blowing use of lambda expressions I’ve ever seen: you pass entire functions as the elements of the queue, and each one is executed when its element is popped off.
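
To give a flavor of the mechanism, here is a simplified sketch of the idea – not Matt’s actual implementation (see his post for the real thing) – which assumes everything is queued from the UI thread, since WPF marshals RunWorkerCompleted back there:

using System;
using System.Collections.Generic;
using System.ComponentModel;

public static class QueuedBackgroundWorker
{
    // Each queued item is a delegate that starts a BackgroundWorker; when a
    // worker completes, the next item is dequeued and started.  The work runs
    // off the UI thread, but strictly in the order it was queued.
    public static void QueueWorkItem<TArgument, TResult>(
        Queue<Action> queue,
        TArgument argument,
        Func<DoWorkEventArgs, TResult> doWork,
        Action<RunWorkerCompletedEventArgs> workerCompleted)
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => { e.Result = doWork(e); };
        worker.RunWorkerCompleted += (s, e) =>
        {
            workerCompleted(e);
            queue.Dequeue();        // this item is finished
            if (queue.Count > 0)
            {
                queue.Peek()();     // start the next item, in order
            }
        };

        queue.Enqueue(() => worker.RunWorkerAsync(argument));
        if (queue.Count == 1)
        {
            queue.Peek()();         // nothing else in flight - start immediately
        }
    }
}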

I added a small class called "CannonVector" that rolls up a direction (up, down, left, or right) and a number of steps; a sketch of it appears below.
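
Its shape is simple (the property names match the usage that follows; the real class is in the source archive linked at the end of this post):

public enum CannonDirection { Up, Down, Left, Right }

public class CannonVector
{
    public CannonDirection DirectionRequested { get; set; }
    public int StepsRequested { get; set; }
}

Then, I created two methods – FireCannon() and MoveCannon() – that wrap my calls to the launcher methods Matt Ellis wrote (see Part 2 of this series):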

private void FireCannon()
{
    QueuedBackgroundWorker.QueueWorkItem(
        this._Queue,
        // Firing involves no movement, so pass a dummy vector as the argument.
        new CannonVector
        {
            DirectionRequested = CannonDirection.Down,
            StepsRequested = 0
        },
        args =>
        {
            this._Launcher.Fire();
            return (CannonVector)args.Argument;
        },
        args => { }   // nothing to do on completion
    );
}


private void MoveCannon(CannonDirection NewDirection, int Steps)
{
    QueuedBackgroundWorker.QueueWorkItem(
        this._Queue,
        new CannonVector
        {
            DirectionRequested = NewDirection,
            StepsRequested = Steps
        },
        args =>
        {
            CannonVector MyCannonVector;
            MyCannonVector = (CannonVector)args.Argument;

            switch (MyCannonVector.DirectionRequested)
            {
                case CannonDirection.Left:
                    this._Launcher.MoveLeft(MyCannonVector.StepsRequested);
                    break;
                case CannonDirection.Right:
                    this._Launcher.MoveRight(MyCannonVector.StepsRequested);
                    break;
                case CannonDirection.Up:
                    this._Launcher.MoveUp(MyCannonVector.StepsRequested);
                    break;
                case CannonDirection.Down:
                    this._Launcher.MoveDown(MyCannonVector.StepsRequested);
                    break;
            }
            return new CannonVector
            {
                DirectionRequested = MyCannonVector.DirectionRequested,
                StepsRequested = MyCannonVector.StepsRequested
            };
        },
        args => { }
    );
}

Cool, huh?

With this in place, everything was smooth again – launcher movement and UI updates alike.

And there was much rejoicing.

So there you have it.  Full source code for this solution can be found in the "KinectMissileLauncher.zip" archive here: http://tinyurl.com/MarkGilbertSource.  Happy hunting!

September 10, 2011 Posted by | Microsoft Kinect, Visual Studio/.NET | 2 Comments