Mark Gilbert's Blog

Science and technology, served light and fluffy.

Target-Tracking with the Kinect, Part 2 – Target Tracking

In Part 1 of this series I laid out the prerequisites.  Now we’ll get into how to turn the Kinect into a tracking system for the cannon.

Manual Targeting

As I mentioned in Part 1, one of the pieces to this puzzle was already written for us – a .NET layer around the launcher.  This layer was provided by Chris Smith in his Being an Evil Genius with F# and .NET post.  He links to the source code at the very end of the post, and the download includes several projects.  We ended up using the RocketLib\RocketLauncher_v0.5.csproj project.

So, now we had a class through which we could send commands to the launcher, such as

Me._Launcher.MoveLeft(5)
Me._Launcher.MoveDown(10)
Me._Launcher.Fire()

Where “Me._Launcher” was an object of type RocketLib.RocketLauncher.  The numbers being passed to the “Move” commands are the number of times to step the launcher turret.  One “step” (as we came to refer to it) seemed to translate into a little less than half a degree of rotation (either left/right or up/down).

Armed with this knowledge (see what I did there?), we were able to whip together a little WPF interface that had five buttons on it – Up, Down, Left, Right, and Fire – that controlled the launcher manually.  That became the “Manual” mode.  The “Auto-track” mode, where the Kinect would control the launcher, would come next.

Auto-Targeting

Now we started going through the Kinect SDK Quickstart video tutorials, produced by Microsoft and hosted by Dan Fernandez.  To begin, we wanted to get to the raw position data (X, Y, and Z) from the camera.  We ended up compressing the first four tutorials (“Installing and Using the Kinect Sensor”, “Setting up the Development Environment”, “Skeletal Tracking Fundamentals”, and “Camera Fundamentals”) into a Friday to get ramped up as quickly as possible.

In “Skeletal Tracking Fundamentals”, Dan explains that the Kinect tracks skeletons, not entire bodies.  Each skeleton has 20 different joints, such as the palms, elbows, head, and shoulders.  We decided to select the “ShoulderCenter” joint as our target.

Next, we added labels for the X, Y, and Z positions of the ShoulderCenter joint to the app, and then started moving around the room in front of the Kinect, seeing how the values changed.  The values are given in meters, with X and Y being 0 when you’re directly in front of the depth camera.  These values are updated in the SkeletonFrameReady event.

Now, the fun could really begin.  We decided to focus on left/right movement of our target, so the Y value is not used in the app at all.

We also decided that since the launcher had a real physical limitation as to how fast it could move, we couldn’t give it too many commands at a time.  The Kinect sends data 30 times a second, so we decided to sample the data twice a second (every 15 frames).
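The handler shown a little later only includes the frame counter being incremented; the gate itself amounts to something like this sketch (hypothetical, assuming a _FrameCount field that goes up once per frame):

If (Me._FrameCount Mod 15 <> 0) Then
    Return  ' Only act on every 15th frame - twice a second at 30 frames per second
End If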

Our first attempt at this was very complicated and clunky, and didn’t work well unless you were at a magical distance from the Kinect (basically we threw enough magic numbers into the equation until it worked for that one distance).  We really ran into problems when we tried to extend that to work for any depth.

It was Doug who hit upon the idea of calculating the angle to turn the launcher as the arc tangent of X/Z, as opposed to what we had been doing (counting steps).  That did two things for us – first, the angle approach correctly took the depth information (the Z measurement) into account, and second, it meant we only had to store the last known position of the launcher (measured as a number of steps, either positive or negative, with 0 being straight ahead).  If we knew the last position, and we knew where we had to move to, we could swivel the launcher accordingly.
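To make the math concrete (my numbers, just for illustration): a target half a meter off center (X = 0.5) and two meters out (Z = 2.0) gives Math.Atan2(0.5, 2.0) ≈ 0.245 radians, or about 14 degrees.  At roughly 160 steps per radian (the 100 * 1.6 factor in the code below), that works out to about 39 steps from center.  Step closer or farther away and Z changes, so the step count adjusts automatically, which is exactly what the fixed step-counting approach couldn't do.  Here's the handler that resulted: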

Private Sub nui_SkeletonFrameReady(ByVal sender As Object, ByVal e As SkeletonFrameReadyEventArgs)
    Dim allSkeletons As SkeletonFrame = e.SkeletonFrame
    Dim NewCannonX, DeltaX As Integer

    Me._FrameCount += 1

    ' Get the first tracked skeleton
    Dim skeleton As SkeletonData = ( _
        From s In allSkeletons.Skeletons _
        Where s.TrackingState = SkeletonTrackingState.Tracked _
        Select s).FirstOrDefault()
    If (skeleton Is Nothing) Then Return ' Nobody is being tracked yet

    Dim ShoulderCenter = skeleton.Joints(JointID.ShoulderCenter)

    Dim scaledJoint = ShoulderCenter.ScaleTo(320, 240)
    Me.UpdateCrossHairs(scaledJoint.Position.X, scaledJoint.Position.Y, scaledJoint.Position.Z)

    Me.HorizontalPosition.Content = ShoulderCenter.Position.X
    Me.VerticalPosition.Content = ShoulderCenter.Position.Y
    Me.DepthPosition.Content = ShoulderCenter.Position.Z

    Dim NormalizedX As Integer = CType(ShoulderCenter.Position.X * 10, Integer)
    Dim AbsoluteX As Integer = Math.Abs(NormalizedX)

    If (Me._IsAutoTrackingEngaged) Then
        If (ShoulderCenter.Position.Z > 0) Then
            ' Math.Atan2 returns the angle in radians; multiplying by 100 * 1.6
            ' empirically converts that angle into steps for the cannon
            NewCannonX = Math.Atan2(ShoulderCenter.Position.X, ShoulderCenter.Position.Z) * 100 * 1.6
            DeltaX = Math.Abs(NewCannonX - Me._LastCannonX)
            If (NewCannonX < Me._LastCannonX) Then
                Me._Launcher.MoveRight(DeltaX)
            Else
                Me._Launcher.MoveLeft(DeltaX)
            End If
            Me._LastCannonX = NewCannonX
            Me._NetCannonX = NewCannonX
        End If
    End If
End Sub

With this logic in place, the tracking became fairly good, regardless of the distance between the target and the Kinect.

Assumptions Uncovered

Since there really wasn’t any feedback that the launcher could give us about its current position, this logic makes a couple of major assumptions about the world.  First, the Kinect and the launcher have to be pointed straight ahead to begin with, and second, the Kinect needs to remain pointing ahead.

We uncovered the first assumption when the launcher stopped responding to commands to move right.  We could move it to the left, but not to the right.  We fired up the application that comes with it, and discovered a “Reset” button that caused the launcher to swivel all the way to one side, then to a “center” point.  This center point was actually denoted by a raised arrow on the launcher’s base – something I had not seen up to this point.  After we reset it, it would move left and right just fine.  As it turns out, the launcher can’t move 360 degrees indefinitely – it has definite bounds.  The reset function moved it back to center to maximize the left/right motion.

After we discovered that, I would jump out to that app to reset the launcher, and then I had to shut it down again before I could use ours (two apps couldn’t send commands to the launcher at the same time – in fact, we got runtime errors if we tried to run both apps at once).  After a while that got old, so we included a reset of our own.  Since we knew the launcher’s current position, we’d just move it in the opposite direction by that amount.  We added a Reset button to our own app, and also called the same method when the app was switched back to Manual mode and when it was shut down.
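A sketch of what that reset boils down to (a hypothetical method, assuming the _LastCannonX bookkeeping from the handler above):

' Hypothetical sketch: walk the launcher back to center using the last
' known position in steps (0 = straight ahead; per the handler above,
' decreasing values correspond to moving right)
Private Sub ResetLauncher()
    Dim StepsFromCenter As Integer = Math.Abs(Me._LastCannonX)
    If (Me._LastCannonX < 0) Then
        Me._Launcher.MoveLeft(StepsFromCenter)
    Else
        Me._Launcher.MoveRight(StepsFromCenter)
    End If
    Me._LastCannonX = 0
End Sub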

We uncovered the second assumption in a rather amusing way.  During one of our tests we noticed the cannon was constantly aiming off to Doug’s (our target at the time) right.  He could move left or right, but the launcher was always off.  He happened to look up and noticed that the Kinect had been bumped, so it wasn’t pointing directly ahead any more.  As a result, the camera was looking off to one side and all of its commands were off.  After that, we were much more careful about checking the Kinect’s alignment, and not bumping it.

Some fun to be had

Early on we had thought up a “fun” piece of icing on this electronic cake.  What if we took the video image from the camera, and superimposed crosshairs on it?  We could literally float an image with a transparent background over the image control on the form.  If we could get the scaling right, it could track on top of the user’s ShoulderCenter joint.

And we did.  This is turned on using the “Just for Mike” button at the bottom of the app.  During the agency meeting demo, I had walked through the basic tracking, using Mike (our President) as the target, and explained about the video and depth images.  Then – very dramatically – I “noticed” the screen and turned to Doug (who was running the computer) – “uh, Doug?  I think we’re missing something.”  At which point he hit the button to add the cross hairs to the video image. “There we go!  That’s better.”  Mike got a good laugh out of it, as did most of the rest of the audience.  Fun?  Check!

Beyond the fun, though, I thought it was cool that we could merge the video and depth information to such great effect.  Between having the launcher track you, and seeing the cross hairs on your chest – it’s downright eerie.
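For the curious, a minimal sketch of what the UpdateCrossHairs call from the earlier handler might look like (hypothetical control names, assuming the crosshair image sits on a Canvas overlaying the 320x240 video control):

Private Sub UpdateCrossHairs(ByVal X As Single, ByVal Y As Single, ByVal Z As Single)
    ' Center the crosshair over the scaled joint position
    ' (Z is available here, but unused in this sketch)
    Canvas.SetLeft(Me.CrossHairImage, X - (Me.CrossHairImage.Width / 2))
    Canvas.SetTop(Me.CrossHairImage, Y - (Me.CrossHairImage.Height / 2))
End Sub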

Performance Issues

So, by this point, we had launcher tracking, both video and depth images refreshing 30 times a second, and crosshairs.

And everything was running on the same thread.

Yeah.  We now had some performance issues to solve.

When the launcher moved at all, and especially when it fired (which took 2-3 seconds to power up and release), the images would completely freeze, waiting for the launcher to complete.  The easy solution?  Duh!  Just put the launcher and the image updates on their own threads.  Um, yeah.  That turned out to be easier said than done.  We’ll cover the multi-threading solution, as well as the speech recognition features in Part 3.  Those two topics turn out to be intertwined.

Update: Full source code for this solution can be found in the “KinectMissileLauncher.zip” archive here: http://tinyurl.com/MarkGilbertSource.


September 10, 2011 Posted by | Microsoft Kinect, Visual Studio/.NET | 2 Comments

Target-Tracking with the Kinect, Part 1 – Intro and Prerequisites

Doug started it.

On some Friday back in late June, he mentioned that Microsoft had released a Beta SDK for the Kinect just the week before.  He asserted “We need to do something with it.  I have a Kinect I can bring in.”  By “do something with it” he meant in the Friday lunch sessions we’d been holding for a year called “Sandbox”.  Sandbox was where a small group of us got together to work with something we didn’t normally get to use during our day jobs.  We tried to keep it light and fluffy, and the Kinect fit both to a T.

Over that next weekend, I mulled over what we could do with the Kinect.  What would be a good enough demonstration of this electronic marvel?  And then it hit me – Joel (another Sandbox participant) had a USB powered foam-missile launcher (like the one pictured here: http://www.amazon.com/Computer-Controlled-Foam-Missile-Launcher/dp/B00100K5RM/ref=pd_sim_t_3).  Now we’re talking!

Microsoft had released a series of tutorials for the SDK, and Chris Smith’s “evil genius” post provided a .NET interface to the launcher.  Chris attributes the original code for this wrapper to Matt Ellis.

We decided that the Kinect would feed commands to the launcher telling it where to aim, and we’d use the speech recognition abilities of the Kinect to let the person in the cross-hairs say the word “fire” to send off a missile.  And so began a series of Fridays where we hacked together an app that turned the Kinect into a target-tracking system for the launcher.  We thought we had the hard stuff already done for us – we simply needed to write something that would connect A to B.  But, as with any good project, the easy parts turned out to be not so easy, and we were pushed and prodded into learning something new.  This is the first of a three-part series describing our solution.

Before we dive into code, I want to call out the software, frameworks, and SDKs that were ultimately needed for this project.  Some of these were called out by the quickstart tutorials, and the rest were discovered along the way.  These were installed in this order:

  • First, Visual Studio 2010.  Our laptop already had the Professional edition, but in the quickstart tutorials Dan Fernandez mentions that he’s working with the Express version.
  • The sample application that comes with the launcher.  This includes the drivers for the launcher itself, USBHID.dll.  The .NET wrapper provided by Matt Ellis will poke into the OpenHID method to send commands to the launcher.
  • The DirectX End-User Runtime.  This is required by the DirectX SDK.  This installer will need to be run as an Administrator (on Windows 7, anyway), and I had to do a manual restart of the machine after it finished.  The installer will not prompt you to do this, but the DirectX SDK (the next step) wouldn’t install correctly until I did.
  • The latest DirectX SDK.  This is a very large install – 570 MB – available from http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=6812.  I had to make sure nothing else was running when I installed this, and had to install it as an Administrator.
  • The Kinect SDK itself, available from http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/.
  • The Coding4Fun Kinect Library available from http://c4fkinect.codeplex.com.  This is not strictly required, but contains a couple of extension methods that simplify translating the Kinect camera data into images.

The above was enough to get going with tracking, but not for the speech recognition we wanted to incorporate.  In particular, we wanted to be able to say the word “Fire”, and then have the launcher fire the missile.  For this, we needed a few additional pieces, which I found out about from Patrick Godwin’s excellent post here: http://www.ximplosionx.com/2011/06/22/intro-to-the-kinect-sdkadding-speech-recognition/

In Part 2, I’ll walk through controlling the launcher manually using Ellis’ class, and then our first pass at controlling the launcher using the data coming off of the Kinect.

In Part 3, I’ll go through the threading that we discovered we needed in order to make the application perform better, and why we ended up converting the entire application to C#.

Update: Full source code for this solution can be found in the “KinectMissileLauncher.zip” archive here: http://tinyurl.com/MarkGilbertSource.

September 7, 2011 Posted by | Microsoft Kinect, Visual Studio/.NET | 4 Comments

References available upon request – Passing variables by reference in C# and VB

On my most recent MVC project, I hit a weird snag when it came to passing a variable by reference in C#.  I had a class that descended from System.Web.Mvc.Controller, and I was trying to pass it to a method that took a reference to System.Web.Mvc.Controller (not my descendent class).  This resulted in compile-time errors (which I’ll show shortly).  This alone wouldn’t result in a blog post, except that the exact same thing in VB works fine.

While this was happening with the System.Web.Mvc.Controller class, this is actually a difference in how C# handles any variable being passed by reference, so I’ve worked up a very simple solution to illustrate this.  First, here is the working VB sample:

Module MainModule

    Sub Main()
        Dim ChildObject As Child

        ChildObject = New Child
        ChildObject.PersonName = "Mark"
        Console.WriteLine("VB Example" & vbCrLf & "**********" & vbCrLf)
        Console.WriteLine(String.Format("Original Name: {0}", ChildObject.PersonName))
        UpdateName(ChildObject)
        Console.WriteLine(String.Format("New Name:      {0}", ChildObject.PersonName))
        Console.ReadLine()
    End Sub


    Private Sub UpdateName(ByRef ObjectToModify As Parent)
        ObjectToModify.PersonName = "Mark II"
    End Sub


    Public Class Parent
        Public PersonName As String
    End Class

    Public Class Child
        Inherits Parent
    End Class

End Module

And now, my first attempt at porting this to C#:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

class MainModule
{
    static void Main(string[] args)
    {
        Child ChildObject;

        ChildObject = new Child();
        ChildObject.PersonName = "Mark";
        Console.WriteLine("C# Example\n**********\n");
        Console.WriteLine(String.Format("Original Name: {0}", ChildObject.PersonName));

        // Line in error
        UpdateName(ref ChildObject);

        Console.WriteLine(String.Format("New Name:      {0}", ChildObject.PersonName));
        Console.ReadLine();
    }

    
    private static void UpdateName(ref Parent ObjectToModify) {
        ObjectToModify.PersonName = "Mark II";
    }


    class Parent
    {
        public string PersonName;
    }
    class Child : Parent
    {
    }
}

The UpdateName() call in the C# version results in two compile errors:

The best overloaded method match for ‘MainModule.UpdateName(ref MainModule.Parent)’ has some invalid arguments   

Argument 1: cannot convert from ‘ref MainModule.Child’ to ‘ref MainModule.Parent’

When I first hit this, I figured it was having a problem casting between objects of type Child and Parent, even though functionally the classes were the same (meaning I did not override Parent properties or methods in Child, nor did I add anything new to Child).  So, in my second attempt I explicitly cast the object on the way in to UpdateName():

static void Main(string[] args)
{
    Child ChildObject;

    ChildObject = new Child();
    ChildObject.PersonName = "Mark";
    Console.WriteLine("C# Example\n**********\n");
    Console.WriteLine(String.Format("Original Name: {0}", ChildObject.PersonName));

    // Line in error
    UpdateName(ref (Parent)ChildObject);

    Console.WriteLine(String.Format("New Name:      {0}", ChildObject.PersonName));
    Console.ReadLine();
}

Now, the compiler was throwing this on the UpdateName() call:

A ref or out argument must be an assignable variable

So, it looks like the explicit cast worked for getting the variable in, but the compiler was still confused about how to handle it on the way back out.  So, my third attempt was to use an explicitly-casted, and more importantly, separate, Parent object:

static void Main(string[] args)
{
    Child ChildObject;

    ChildObject = new Child();
    ChildObject.PersonName = "Mark";
    Console.WriteLine("C# Example\n**********\n");
    Console.WriteLine(String.Format("Original Name: {0}", ChildObject.PersonName));

    Parent ParentObject;
    ParentObject = (Parent)ChildObject;
    UpdateName(ref ParentObject);
    ChildObject = (Child)ParentObject;

    Console.WriteLine(String.Format("New Name:      {0}", ChildObject.PersonName));
    Console.ReadLine();
}

That worked.  I then pressed my luck and tried leaving out the explicit casts:

static void Main(string[] args)
{
    Child ChildObject;

    ChildObject = new Child();
    ChildObject.PersonName = "Mark";
    Console.WriteLine("C# Example\n**********\n");
    Console.WriteLine(String.Format("Original Name: {0}", ChildObject.PersonName));

    Parent ParentObject;
    ParentObject = ChildObject;
    UpdateName(ref ParentObject);
    ChildObject = ParentObject;

    Console.WriteLine(String.Format("New Name:      {0}", ChildObject.PersonName));
    Console.ReadLine();
}

The ParentObject assignment before UpdateName() worked fine, but the ChildObject assignment threw this error:

Cannot implicitly convert type ‘MainModule.Parent’ to ‘MainModule.Child’. An explicit conversion exists (are you missing a cast?)

I wouldn’t say I was "missing" it, Bob.

I then cracked open the two executables in JustDecompile to see what these two boiled down to.  As it turns out, the key piece of logic – the call to UpdateName – is identical whether I use the working VB sample, or the working C# sample.

VB:

[Screenshot: the decompiled UpdateName call, viewed in JustDecompile]

C#:

[Screenshot: the decompiled UpdateName call, viewed in JustDecompile]

So, for better or worse, the VB compiler is simply doing more for me than the C# compiler.  It saw that I was trying to pass a descendent object, and cast it for me under the covers: behind the scenes it creates a temporary Parent variable, passes that to the method, and copies the result back into my Child variable when the call returns.  That is essentially my third C# attempt, generated automatically.

The full source code for this sample can be found in the PassByRef.zip archive on my SkyDrive: http://tinyurl.com/MarkGilbertSource.  The solution requires Visual Studio 2010, but this works the same way with Visual Studio 2008 (which is what we wrote the MVC application in originally, and where I first saw this behavior).  For the C# application, I’ve included all four attempts as complete copies of the Main() routine.  To try one, simply uncomment it, and comment out the others.

August 24, 2011 Posted by | Visual Studio/.NET | Comments Off on References available upon request – Passing variables by reference in C# and VB

So secure I can’t use it – Oracle ODP and Windows 7 User Access Control

No more Windows XP for Mark.  I say that with at least a tear of sorrow – XP was a champion operating system for me for over 7 years.

As of a couple of weeks ago, my primary workstation at Biggs was replaced, and that meant moving from Windows XP to Windows 7.  I won’t go into all of the details as to why I was still using Windows XP in 2011, but a large part of it has to do with the issues I’m going to describe here.

These issues really center around two things – the Oracle Data Provider (ODP) and Windows User Access Control (UAC).

SELECT * FROM Grrr
The issues with ODP started right at the beginning, with the installer itself.  When I build an application that will go against Oracle, I have to use the Oracle 10.2 client (since that’s what the customer is using).  When I first tried to run the installer for the 10.2 client, it died, saying that it only supported up through Windows version 6.  So, I had to set the compatibility to Windows XP SP3 (right-click on the .exe and go to Properties, then go to the Compatibility tab).

That got me past the initial check, but the install failed later in the process.  I then tried restarting the installer as an Administrator (still with Windows XP compatibility enabled).  It got further this time, but not quite to the end. 

The installer was now failing at the 98% mark because it was trying to register some files using C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\Gacutil.exe.  That folder existed on my machine, but not that file.  How nice that the developer who put the Oracle installer together didn’t include something that could register assemblies with the GAC, and furthermore assumed that I would have VS 2005 (aka Visual Studio 8) on my machine.  I checked my home machine (which had Visual Studio 2005 on it at one point) and found that I had that file, so I copied Gacutil.exe and Gacutil.exe.config from my home machine to my work machine.  This time, the installer finished successfully.  (I considered looking for Gacutil.exe elsewhere on my machine and just copying it into the expected folder, but since this worked I didn’t bother going down that route.)  Yay for me.

When I tried to use ODP, however (e.g., running a site in Debug mode on my machine, or running a test through NUnit that hit the database), it failed with a very generic Oracle error (something to the effect of "Oracle error has occurred").  Through trial and error, I found if I ran THOSE applications as an Administrator, the connections to Oracle would work fine.

Let me be clear.  I wasn’t trying to run Oracle locally – I was trying to connect to our development Oracle instance from my workstation.  How were the connections to our development SQL Server instance working?  Yeah, those worked fine on my machine whether I ran Visual Studio and NUnit as an Administrator or not.

How about PL/SQL Developer?  Couldn’t connect to Oracle unless it was run as an Administrator.

How about SQL Server Management Studio?  Worked fine as Administrator or not.

Grrr

My domain account is an administrator of my local machine, but apparently that isn’t enough.  I found that if I completely disabled UAC, however, these would run correctly.  I was really tempted to just disable UAC altogether – which does work, by the way – but I really wanted to find a way to work within the system, which boiled down to running Visual Studio, NUnit, and PL/SQL Developer as an Administrator, every time. 

Why is nothing ever easy?
Running these apps as Administrator was a bit of a pain, however, mostly because I kept forgetting that I needed to do it.  So, I set out to try to smooth out the process of launching applications under UAC.

I religiously use an application launcher called SlickRun.  I hit Windows-Q to bring up the SlickRun command line, type in a customized keyword, or "MagicWord", for the application, site, or command I want to run, and hit Enter.  So, for example, to bring up PL/SQL Developer I would hit Windows-Q, type in "oracle", and hit Enter.

SlickRun has an option that can be checked per MagicWord called "Prompt for user-account (aka elevate to admin)".  I turned that on for PL/SQL Developer, so when I started it up it would throw up the UAC prompt.  Once I hit "Yes" (or Shift-Tab then Space to click the "Yes" button without my fingers leaving the keyboard) PL/SQL Developer would open as an Administrator.  I did the same thing for Textpad and NUnit.

My next challenge was Visual Studio.  The normal way I open a solution in VS is to browse to the .sln file on my file system and double-click it.

For that, I tried marking the Visual Studio executable to always run as Administrator:

  • Right click on C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe
  • Select Properties
  • Click on the Compatibility tab
  • Click "Run this program as an administrator"

Once I did that, however, double-clicking the SLN file did nothing.

Then I remembered that SLN files are set to open with the Visual Studio Version Selector, which reads the SLN file and opens it in the correct version of Visual Studio.  I found that executable (C:\Program Files\Common Files\microsoft shared\MSEnv\VSLauncher.exe), and tried the same "Run this program as an Administrator" trick.  That didn’t work.  According to this StackOverflow article, that WOULD have worked prior to SP1 being installed: http://stackoverflow.com/questions/3304425/visual-studio-version-selector-doesnt-open.  After SP1, you have to hack the VSLauncher.exe manifest:

  • Back up VSLauncher.exe and VSLauncher.exe.manifest
  • Run VS Command Prompt as an Administrator
  • Switch into the C:\Program Files\Common Files\microsoft shared\MSEnv folder
  • Run this command:  mt -inputresource:"VSLauncher.exe" -out:VSLauncher.exe.manifest
  • Alter the VSLauncher.exe.manifest file, specifically the "level" attribute of the requestedExecutionLevel tag:

                    <requestedPrivileges>
                        <requestedExecutionLevel level="requireAdministrator" uiAccess="false">
                        </requestedExecutionLevel>
                    </requestedPrivileges>

  • Run this command:  mt -outputresource:VSLauncher.exe -manifest VSLauncher.exe.manifest

That allows the Launcher to run as an Administrator, so it can now launch Devenv.exe as an Administrator.

<rant>
Why in the world is any of this necessary?  Why are local admin rights not sufficient for running a site against Oracle on my local machine?  Alternatively, do connections to SQL Server from my local machine require similarly inflated privileges, but because it’s a Microsoft data connector on a Microsoft operating system, it just works more smoothly?
</rant>

Even after all of this, though, I don’t regret moving to Windows 7.  There are a lot of things I like about it, but there needs to be some additional grey matter applied to the concept of application security.

August 10, 2011 Posted by | Oracle, Visual Studio/.NET, Windows 7 | Comments Off on So secure I can’t use it – Oracle ODP and Windows 7 User Access Control

One too many cookies – Visual Studio and FireCookie

My most recent project held most of the information tied to the current user’s session in cookies.  I needed to be able to pass information among the client logic, the server logic, and the Flash application hosted on the site.  Cookies seemed to be the common and easiest medium for that.  For the most part, that architectural decision turned out to be a good one, with a couple of exceptions.

The exceptions are the topic of this blog post and – of course – most of the issues I’ll describe were simply me learning how the world worked.

For the purposes of today’s post, I’ll use “Flash” to refer to the Flash application hosted in the site, “Client” to represent the client-side JavaScript logic, and “Server” to represent the ASP.NET MVC 2 server application.

 

Taking the red pill

Flash was hosted on the home page of the site, and we wanted it to display a special background image when the user visited the home page in a certain way.  If you visited the default home page, “/Home.aspx/Index”, Client would set a cookie called “background” that contained a default value, but only if the “background” cookie wasn’t already set by the Server.  Whatever value ended up in the cookie, Flash would see it and swap in the corresponding background image.  If, however, you visited one of the “themed” home pages, such as "/Home.aspx/Paper”, then Server would set the “background” cookie, thus preempting Client.  Once this cookie was set, as long as the user didn’t browse to a different themed home page, this cookie would persist during the session, and every subsequent request to the home page would have that background.

At least, that was the theory.  During some of our initial testing, we found that Flash was having problems displaying the correct background every time.  We eventually tracked it down to the value of the “background” cookie.

I ran the site through Visual Studio so I could step through the Server logic to see what was happening.  On the first request to /Home.aspx/Paper, the Server logic would set the “background” cookie in the Response.  The view would then render.  Then, on the next post back a second “background” cookie would appear in Request.Cookies. 

Excuse me?

Oh, and it got worse.  If that second request was for another themed page, my Server logic would add a new cookie to Response (as I had expected), but a copy of that same cookie would also be added to the Request.Cookies collection.

What. The. Heck?!?

 

Down the rabbit hole

My first thought at this point was that for some reason, Server was not able to overwrite the cookies being set either by itself or by Client.  I spent several hours trying different methods of adding cookies before I finally came across a StackOverflow.com article that referenced the MSDN documentation on cookies:

ASP.NET includes two intrinsic cookie collections. The collection accessed through the Cookies collection of HttpRequest contains cookies transmitted by the client to the server in the Cookie header. The collection accessed through the Cookies collection of HttpResponse contains new cookies created on the server and transmitted to the client in the Set-Cookie header.

After you add a cookie by using the HttpResponse.Cookies collection, the cookie is immediately available in the HttpRequest.Cookies collection, even if the response has not been sent to the client.

(Emphasis mine; source: http://msdn.microsoft.com/en-us/library/system.web.httpresponse.cookies.aspx)

Ok, that at least explains why cookies were showing up in both Response.Cookies and Request.Cookies. 
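A minimal sketch of that behavior inside a controller action (illustrative values only):

Dim ServerCookie As New HttpCookie("background", "server-value")
HttpContext.Response.Cookies.Add(ServerCookie)

' The response hasn't been sent to the client yet, but the new cookie
' is already visible in the Request collection:
Dim EchoedCookie As HttpCookie = HttpContext.Request.Cookies("background")
' EchoedCookie.Value is now "server-value"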

With that past me, I turned my attention to why Request.Cookies still had two – one set by Client and one set by Server.  Through a lot more experimentation, I found that my Server cookies were, in fact, being overwritten, but in a round-about way.

Let’s get to the code.  To reproduce this I created an empty MVC 2 application, added a HomeController:

Public Class HomeController
    Inherits System.Web.Mvc.Controller

    Function Index() As ActionResult
        Return View()
    End Function

    <HttpPost()> _
    Function Index(ByVal SubmitButton As String) As ActionResult
        Dim ServerCookie As HttpCookie

        ServerCookie = New HttpCookie("background", String.Format("server-cookie-here: {0}", Now.Ticks))
        HttpContext.Response.Cookies.Add(ServerCookie)

        Return View()
    End Function

End Class

And the Home.aspx/Index view:

<%@ Page Language="VB" Inherits="System.Web.Mvc.ViewPage" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Index</title>
</head>
<body>
    <script type="text/javascript">
        function setCookie(c_name, value, exdays) {
            var exdate = new Date();
            exdate.setDate(exdate.getDate() + exdays);
            var c_value = escape(value) + ((exdays == null) ? "" : "; expires=" + exdate.toUTCString());
            document.cookie = c_name + "=" + c_value;
        }

        function getCookie(c_name) {
            var i, x, y, ARRcookies = document.cookie.split(";");
            for (i = 0; i < ARRcookies.length; i++) {
                x = ARRcookies[i].substr(0, ARRcookies[i].indexOf("="));
                y = ARRcookies[i].substr(ARRcookies[i].indexOf("=") + 1);
                x = x.replace(/^\s+|\s+$/g, "");
                if (x == c_name) {
                    return unescape(y);
                }
            }
        }

        if (getCookie("background") == null) {
            setCookie("background", "client-cookie-here", null);
            document.write("Client cookie set!");
        }
    </script>

    <div>
        <%Html.BeginForm()%>
            Click here to post back: <input type="submit" value="Submit" id="SubmitButton" />
        <%Html.EndForm()%>
    </div>
</body>
</html>

The setCookie and getCookie JavaScript functions used for this demo were pulled from http://www.w3schools.com/js/js_cookies.asp, and were NOT the actual methods I was using when I found this problem (we have a custom library for managing cookies).  However, the source problem ended up being unrelated to the specific Client cookie library.

Finally, I hacked the RegisterRoutes() function in Global.asax.vb to allow for this page to be served as Home.aspx/Index, which was the default home page for the site (again, this is an artifact of this sample which has no real bearing on the problem being analyzed; I include it here just for completeness):

Shared Sub RegisterRoutes(ByVal routes As RouteCollection)
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}")

    routes.MapRoute( _
        "Default", _
        "{*pathInfo}", _
        New With {.controller = "Home", .action = "Index", .id = UrlParameter.Optional} _
    )

    routes.MapRoute( _
        "CatchAll", _
        "{controller}/{action}/{id}", _
        New With {.controller = "Home", .action = "Index", .id = UrlParameter.Optional} _
    )

End Sub

This is enough logic to reproduce the error I was seeing.  I set a break point on the POST version of HomeController.Index(), and inspected the cookie collections:

[Screenshot: Visual Studio watch window of the Response and Request cookie collections at the breakpoint]

Ignore the third and fourth watches – those will come into play later.  Notice that there are zero Response cookies set so far, and the only Request cookie is from the Client.  The Client cookie was set when the GET version of Index was executed.  So far so good.

Now, if I step through the rest of the Index action method, the Server cookie is added to both collections:

[Screenshot: watch window showing the new Server cookie in both the Response and Request collections]

Both the Response and Request collections get the new Server cookie.

But why wasn’t the Client cookie being overwritten by the Server cookie in the Request collection?  Why are there two?  If I let the form make another round trip to the Client, things begin to get really hairy:

[Screenshot: watch window on the second postback, before the new Server cookie is set]

Here we have the form being posted back, but the new Server cookie hasn’t been set yet.  Notice that the Response collection is, again, zero, but there are still two cookies in Request.  If I step through the action method, that’s where the hair-pulling REALLY began:

[Screenshot: watch window showing the Client cookie plus two Server cookies in Request.Cookies]

My Response collection looks fine.

My Request collection, however, is just getting out of hand.  Not only do I have both the Client and the Server cookies, I have TWO Server cookies!

At this point, I was convinced that my logic for overwriting cookies was simply not working.  But, as I said before, those hours of research turned up nothing.  If I had just pressed on, I would have saved myself a bit of sanity.  When I let the form make yet another round trip to the Client, I get this:

[Screenshot: watch window after another round trip, with Request.Cookies back down to two cookies]

A couple of things to notice here.  The first is that my Request.Cookies collection is back down to 2 cookies – one Client and one Server.  Awesome!  Also notice that the Server cookie returned has the later of the two timestamps shown for the Server cookies in the previous screenshot.  In other words, my Server logic is, in fact, overwriting the Server cookies.  Doubly-awesome!

Ok, so one puzzle solved.  Now I just have to figure out why my Server cookies are still not overwriting the Client one.

For this, I had to dig way, WAY back in my brain, back to my early days of working with the web.  I recalled that there were some other options when creating cookies that would allow them to be read or hidden from portions of your application.  Cookies were tied to the domain that created them, but even within the domain you could separate them out by a property called “path”.  Perhaps the path for these two cookies wasn’t identical, so the browser was treating them as separate creatures.  I checked the property in Studio:

[Screenshot: Visual Studio watch window showing the same Path value for both cookies]

Nuts.  Both cookies have the same path.  Well, it was a good try.  I examined the other properties on the two cookies, and couldn’t find anything else that should have been differentiating them.

By this point, I had been falling down this rabbit hole for over a day.  While I had made some clear progress at understanding how cookies were handled, I still hadn’t solved the core issue.  I decided to get another pair of eyes on this issue.  My colleague, Ron, obliged.

His first thought was to pull the site up in Firefox, and turn FireCookie loose on it.  He, too, wanted to examine the cookies, but came at it from a completely different angle than I had.  That turned out to make all the difference in the world.  Here’s what these two cookies looked like in FireCookie:

[Screenshot: the two “background” cookies in FireCookie, showing different Path values]

You’ve GOT to be kidding me.  The Path properties WERE different after all.  Visual Studio wasn’t reporting the Client cookie’s Path correctly!

Armed with that information, the solution was easy.  I simply modified the Client cookie logic to explicitly set the Path property to “/”:

function setCookie(c_name, value, exdays) {
   var exdate = new Date();
   exdate.setDate(exdate.getDate() + exdays);
   var c_value = escape(value) + ((exdays == null) ? "" : "; expires=" + exdate.toUTCString()) + "; path=/";
   document.cookie = c_name + "=" + c_value;
}

Notice the addition of “path=/” to the end of the c_value variable. Once I did that, my Client and Server “background” cookies could now be viewed as one and the same. That allowed them to overwrite, and preempt each other.

 

Emerging from the rabbit hole

Beyond learning how the Request and Response cookie collections work, I learned a couple of valuable lessons here, both regarding the Visual Studio debugger.  The first is that a round-trip to the browser is apparently required for the debugger to de-dup the list of cookies shown in the Request.Cookies collection.  That was terribly confusing for me.  I was expecting to see the new Server cookie simply overwrite the old one – without having to go down to the browser to do it.

The other lesson learned is that I can’t trust the debugger to accurately report the “Path” property for cookies (whether it can’t report the correct Path for some reason, or it’s simply a bug, it’s confusing either way).  I need to use a client-side tool like FireCookie for that.

July 22, 2011 Posted by | ASP.NET MVC, Visual Studio/.NET | Comments Off on One too many cookies – Visual Studio and FireCookie

Making sense on many levels – ASP.NET MVC 2 and Model-Level Error Reporting

In the previous episode of “Mark and the Chartreuse-Field Project”, Mark was working to get site-wide error reporting up and running.  Today, Mark tackles model-level error reporting.  Let’s tune in and see how he’s doing.

***

Early on in this project I learned how to associate a custom error message with a specific form field:

ViewData.ModelState.AddModelError("MyField", "My custom error message here")

That allowed the out-of-the-box validation messaging to highlight the offending field and display the message right next to it:

<%=Html.ValidationMessageFor(Function(model) model.MyField)%>

However, today I ran into a situation where I needed to be able to display a custom message that applied to the entire page, not a specific field.  There was a validation summary control at the top of the page already:

<%= Html.ValidationSummary(True) %>

My first thought was to associate the custom message with that control so it would appear there.  Following the pattern to associate a message with a specific field, my first attempt at code-roulette looked like this:

ViewData.ModelState.AddModelError("", "My custom error message here")

I then threw a dummy exception in the middle of one of my controller actions to force an error to appear.  No dice – the error was simply swallowed by the page.  I did some searching and came across a post (http://stackoverflow.com/questions/4017827/manually-adding-text-to-html-validationsummary) that mentioned using an asterisk as the field name to get the message to show up in the validation summary:

ViewData.ModelState.AddModelError("*", "My custom error message here")

Still no dice.  Ok, time to back up a minute.  Was my custom error even making it into the ModelState object?  I put a breakpoint on the line immediately after AddModelError, and inspected the ViewData.ModelState.Keys property:

(0): "id"
(1): "CurrentEvent.Title"

(17): "*"

So, if the error is getting added to the ModelState correctly, then why isn’t it showing up?  The answer came in two parts.  First, I was passing a “True” to Html.ValidationSummary.  This was configuring the control to not display messages that were tied to a field (the actual parameter name here is “excludePropertyErrors”).  My asterisk was being treated like another field name – one that didn’t match a field-level validation control – and was therefore being excluded by the validation summary control.  Second, a post from the ASP.NET forums (http://forums.asp.net/p/1628537/4193163.aspx) suggested that displaying model-level errors was not possible in MVC 2, though apparently it is in MVC 3.  Upgrading wasn’t an option for me, so I needed to find another way.

What if I were to create a new model property called ErrorMessage, and then associate the custom messages with THAT field?  Then, I would just need to add a validation control for that field to my view.  I added the property, “ErrorMessage”, to my model and modified my AddModelError call like so:

ViewData.ModelState.AddModelError("ErrorMessage", "My custom error message here")

Then I added a Html.ValidationMessageFor control to my view, and tied it to this new property:

<%=Html.ValidationMessageFor(Function(model) model.ErrorMessage)%>

And voila!  Model-level errors in MVC 2!

March 2, 2011 Posted by | ASP.NET MVC, Visual Studio/.NET | 4 Comments

Can you please page Mark for me? ASP.NET DataGrid and Custom Paging

One of the main reasons I maintain a technical blog is to document how I did something, because inevitably I’ll need to do that same thing three more times in my career, but those three times will be so far apart in time that I’ll forget what I did.  Having said that, I maintain no illusions that I write about anything novel here – please assume this is just me being slow, and that I’m giddy about figuring out what the rest of you already knew.  For today’s episode, I’m writing about custom paging with DataGrids.

***

I’ll admit it.  I rushed.

I needed to build an admin tool that showed a list of items to be approved or rejected, and I decided to just slam a collection into a DataGrid, add a few buttons, and call it a day.  On the off-chance that we got a lot of submissions, I turned on the DataGrid’s default paging so the admin wouldn’t have to scroll through hundreds of entries at a time on one long page.  Ironically, it was the fact that we got hundreds of entries that caused the issue.

The issue was that the admin tool timed out before showing even the first page.  Because I used the grid’s built-in paging controls, it was attempting to bring back the ENTIRE data set every time, but only show one slice of that data (the page requested).  It worked great when I had 15 records in my development database.  It didn’t work out so well when I had 800 (part of each record was a BLOB, so while 800 seems like a small number, there were actually a lot of bytes coming back).

In my rush to get the admin tool out the door, my shortcut turned into a paper cut – small, seemingly innocuous, and very painful.

The solution was to modify the grid and the stored procedure to only bring back and attempt to display one page of records at a time.  The first part was to modify the stored procedure and the .NET code that invokes it to take parameters for the page requested and the number of records to include on each page.  That was relatively straightforward.

The second part was to wire up the DataGrid control to call this stored procedure with the correct page each time.  That required a little more work.

I first handled the grid’s PageIndexChanged event, and called my “Load” method using e.NewPageIndex, the 0-indexed page of data being requested.  The “Load” method would then set the DataGrid.CurrentPageIndex property to that value.  After that I would call my stored procedure and bind the results to the DataGrid as before.

At this point, the grid displayed with a number “1” in the corner as I had intended, and the first page of results was showing, but even my development database had more than one page of data.  Where were the other page numbers?  After some reading, especially this post, I found that I needed to set the DataGrid’s VirtualItemCount property to the total number of items in the result set.  That, combined with the DataGrid’s PageSize property (which I had already set), let it properly render all the page numbers.

And it did.

Except that clicking on any of them merely returned the first page of data again.  After more reading, I found that there was another property on the DataGrid that needed to be turned on.  I had set the AllowPaging property to True, but I found that I also needed to set the AllowCustomPaging property to True.  Once I did that, the page numbers mapped to the proper page of data.
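Pulling it all together, here’s a minimal sketch of the wiring (hypothetical grid and data-layer names; the markup sets AllowPaging="True", AllowCustomPaging="True", and PageSize):

Private Sub ItemsGrid_PageIndexChanged(ByVal source As Object, ByVal e As DataGridPageChangedEventArgs) Handles ItemsGrid.PageIndexChanged
    Me.LoadPage(e.NewPageIndex)
End Sub

Private Sub LoadPage(ByVal PageIndex As Integer)
    ItemsGrid.CurrentPageIndex = PageIndex
    ' GetSubmissionCount and GetSubmissionsPage are hypothetical wrappers
    ' around the modified stored procedure described above
    ItemsGrid.VirtualItemCount = Repository.GetSubmissionCount()
    ItemsGrid.DataSource = Repository.GetSubmissionsPage(PageIndex, ItemsGrid.PageSize)
    ItemsGrid.DataBind()
End Sub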

And a cooling salve was applied to the paper cut.

February 18, 2011 Posted by | Visual Studio/.NET | Comments Off on Can you please page Mark for me? ASP.NET DataGrid and Custom Paging

You can’t tell me what the error is because there was an error? ASP.NET MVC 2 and ELMAH

I just started a new project this week – a chartreuse-field project.  That is, it’s a greenfield project where we’re scrapping the existing site and replacing it with a new one, but with some legacy pieces being brought forward.  Why not call it a “brownfield project with heavy maintenance”?  Because we’re moving away from ASP.NET WebForms and going with the ASP.NET MVC 2 framework.  (The client is running their systems on .NET 3.5, which means we can’t go with the newer MVC 3 framework yet.  Sigh.)

While I had specific reasons for pushing the site in this direction, my decision to go this route was seriously called into question almost right out of the gate.  I was trying to configure ELMAH to work with MVC, and it was bothering me how difficult it was becoming.  Honestly, I think most of the difficulty was getting used to the MVC mindset again.  My first and only experience with ASP.NET MVC was building the Microsoft Developers of Southwest Michigan user’s group site, http://DevMI.com, but that was a year and a half ago.

The core issue was getting the site to render my custom “Error” view when an unhandled exception occurred.  I was able to get the view to render by browsing to ~/Error.aspx/Unknown, so I used that in my web.config:

<customErrors mode="On" defaultRedirect="~/Error.aspx/Unknown">
  <error statusCode="404" redirect="~/Error.aspx/NotFound"/>
</customErrors>

I then tried to test it by throwing a dummy exception in the home page’s controller.  The generic 500 page came up, not my custom one.

Grrr.

I spent a couple of hours researching this before I came across these two posts:

http://devstuffs.wordpress.com/2010/12/12/how-to-use-customerrors-in-asp-net-mvc-2/

http://www.hanselman.com/blog/ELMAHErrorLoggingModulesAndHandlersForASPNETAndMVCToo.aspx

Both of these posts mention the HandleError attribute that can be applied to both a controller class and the methods within.  What seemed to be happening was that this was preventing my custom error page (as defined in the web.config) from rendering.

I had started with the basic out-of-the-box MVC 2 template site, and then started trimming back what I didn’t need.  After reading these articles, I took a look at the HomeController class, and sure enough the HandleError attribute was applied.  I removed it and hit the home page again.  My custom “Error” view finally rendered.

Now that that was working, I dropped the other ELMAH configuration pieces in place in the web.config.  These mostly worked as they had in the past – I was able to see the custom error page, ELMAH was recording the exception on the file system, and it was generating an email to me with the exception – great!  We’re making progress!  There was just one more piece to my standard ELMAH implementation – the error report page.

With every site that I use ELMAH on, I configure the /errors/report.axd virtual page to show the list of the ELMAH exceptions that have happened on that site.

[Screenshot: the ELMAH error report page at /errors/report.axd]

This allows me to go back through the Production logs to get a better feel for recurring issues.  Since Production has multiple, load-balanced servers, having a separate error report that is tied to each server allows me to identify problems that are occurring on only that web server.

With the new MVC site, however, my first attempts to browse to ~/errors/report.axd resulted in a 404.  After a little thought, I theorized that ASP.NET was seeing the request for this page and first checking to see if it existed as a file on the file system.  When it didn’t, it moved on to the routes I had defined in Application_Start in Global.asax.  It didn’t find a route for that file, but it did see my “catch all” route at the bottom:

Public Class MvcApplication
    Inherits System.Web.HttpApplication

    Shared Sub RegisterRoutes(ByVal routes As RouteCollection)
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}")

        ' MapRoute takes the following parameters, in order:
        ' (1) Route name
        ' (2) URL with parameters
        ' (3) Parameter defaults
        routes.MapRoute( _
            "Default", _
            "{controller}.aspx/{action}/{id}", _
            New With {.controller = "Home", .action = "Index", .id = UrlParameter.Optional} _
        )

        routes.MapRoute( _
            "Catch All", _
            "{*path}", _
            New With {.controller = "Error", .action = "NotFound"} _
        )

    End Sub

    Sub Application_Start()
        AreaRegistration.RegisterAllAreas()
        RegisterRoutes(RouteTable.Routes)
    End Sub
End Class

The catch all says “anything that doesn’t match one of the above rules should render the 404 page”.  And that was exactly what I was getting.  The search for report.axd stopped there because it finally matched a routing rule, and therefore the request never made it to ELMAH.

It was then that I noticed the routes.IgnoreRoute rule at the very top.  That looked interesting.  What if I added one like that for /errors/report.axd?  I added this as the second rule in the RegisterRoutes method:

routes.IgnoreRoute("errors/report.axd")

I tried the page again, and this time it worked!

Well, sort of.  The page rendered, but without any of the usual styling.  I fired up Fiddler and hit the page again.  I saw requests for both errors/report.axd and errors/report.axd/stylesheet.  Aha!  I bet my rule doesn’t cover that.

I looked at the original .IgnoreRoute rule, and saw that it used {*pathInfo} in the definition.  That looked a lot like a wildcard, so I tried adding a variant of my first rule to the list.

routes.IgnoreRoute("errors/report.axd")

routes.IgnoreRoute("errors/report.axd/{*pathInfo}")

Hey, look at that!  The page renders beautifully now!  Ok, now it was time to tempt fate.  I wondered if the first rule was actually a special case of the second, and if the second would actually cover everything.  I commented the first rule out and tried the page again.  Success!  The page still looks and functions beautifully.  I removed the first rule and committed everything.

My initial doubts were laid to rest.  Onward!

February 5, 2011 Posted by | ASP.NET MVC, Visual Studio/.NET | Comments Off on You can’t tell me what the error is because there was an error? ASP.NET MVC 2 and ELMAH

RedGate Reflector Announcement

Here is a copy of an email I sent earlier this evening to RedGate, in response to their announcement regarding Reflector (http://www.red-gate.com/products/dotnet-development/reflector/announcement):

 

***

To: Info@Red-Gate.com
Subject: Reflector free version being discontinued?

Earlier today I came across an announcement on the RedGate .NET Reflector landing page regarding the future of the product.  Your decision to make version 7 a paid-product is a little disappointing, but not really surprising.  You are, after all, a commercial enterprise, and I understand the need to charge for the products and services you provide – even if they were previously made available to the development community free of charge.

What disturbs me, however, is your decision to end our ability as developers to use the earlier versions of the tool after May 30, 2011, according to the FAQs on this announcement (http://www.red-gate.com/products/dotnet-development/reflector/announcement-faq ):

Q: How much longer will I be able to obtain and use a free version of .NET Reflector?
A: A free version will be available for download until the release of Version 7, scheduled for early March. The free version will continue working until May 30, 2011.

Do I understand this correctly?  If I were to fire up Reflector on May 31, and receive the "a new version is available, would you like to update now?" message, and I respond "No", will the tool continue to function?  Or will it say "sorry, you must upgrade to version 7 to continue using this tool", which will require a minimum payment of $35?

I look forward to your clarifying response.

Mark Gilbert.
Co-Coordinator
Microsoft Developers of Southwest Michigan
http://DevMI.com

***

 

Update 2/5/2011: Anthony from the RedGate .NET Reflector team responded in less than a day:

“Just to clarify your question, the free version will continue to work until May 30th 2011. So you will need to upgrade to the new version after this period.”

Terribly disappointing.  I responded to Anthony, and very pointedly argued what a terrible idea time-bombing version 6 was, and offered two possible solutions: remove the time-bomb from version 6, and/or create a stripped-down, free edition of version 7.

The RedGate forums have been hopping since the announcement, and there are a lot of people very upset about this.  It’s also interesting that in all of the feedback, there has been very little response by RedGate, at least on the forums.  My hope is that the folks at RedGate HQ are reviewing this feedback and are formulating a change to their approach.

February 3, 2011 Posted by | Tools and Toys, Visual Studio/.NET | Comments Off on RedGate Reflector Announcement

An image is worth a thousand bytes: Images and Data URIs

One of my current projects involves handling images generated from within Flash.  Early on, I made an architectural decision to store those images in the database rather than on the file system of the web server.  Since I was interfacing with Flash, I wrote a series of FluorineFX Web services to save and retrieve the images and associated data.  Data from Flash would come in as a FluorineFx.AMF3.ByteArray and get converted to a .NET Byte array for storage; retrieval would reverse the process.

Everything was just peachy-keen until I had to display those same images in a .NET DataGrid.  My business layer already had the functionality to return galleries of images in one fell swoop.  All I was trying to do was bind that output to the DataGrid, but I wasn’t sure how to bind a Byte array to something that would allow it to render as an image in the browser.

Doing some searching on the internets turned up a very interesting mechanism with the <img /> tag.  Normally, this tag’s “src” attribute needed to point to a URL of some sort.  If nothing else worked, I could always create a handler (GetImage.ashx, or something similar) that would take an ID of the image I wanted to show, retrieve it from the database, and then write it out to the browser.  This handler would become the URL for “src”.  But that would mean the page would have to make multiple requests to the web server, one for each image.  I already had all of the image data pulled together, and wanted a more direct way to render them – as images – to the page.

The interesting find for “src” was that several browsers support inline images, which allow me to set the data for the image (as a Base 64 string) as the value for “src”.  That allowed me to do this in my DataGrid:

<img src="data:<%#DataBinder.Eval(Container.DataItem, "MimeType")%>; 
base64, 
<%# Convert.ToBase64String(DataBinder.Eval(Container.DataItem, "ImageBytes"))%>" />

(The additional carriage returns are presented here for clarity only.)

The “src” tag here breaks down into three parts:

  1. The “data:” directive, which tells the browser that the bytes of the resource are coming down inline, rather than a URL to request.
  2. The MIME type for the image to be displayed, followed by a semi-colon.  I store this alongside the image data in the database, so this was just another field in my business object to be bound here.
  3. The keyword “base64”, followed by a comma and the string of bytes itself.  The latter is the Byte array returned from the database, converted to a Base64 String.
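Put together, the rendered tag ends up looking something like this (a made-up PNG value, truncated for illustration): <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg…" />.  The browser decodes the Base64 string and renders it as if the bytes had arrived from a separate request.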

The feature goes by the more formal name of “data URI”, and has been supported by most browsers for a good long time (see this forum post from 2004, which is what got me going down this road to begin with: http://www.velocityreviews.com/forums/t89692-how-to-display-in-memory-image.html). 

I say “most” because Internet Explorer prior to version 8 does not support this at all.  Version 8 supports it in a limited way – only certain tags, and a limit of 32k for the file size (see http://www.websiteoptimization.com/speed/tweak/inline-images/ or http://en.wikipedia.org/wiki/Data_URI_scheme).  Internet Explorer 9 reportedly removes many of those restrictions (again, see http://en.wikipedia.org/wiki/Data_URI_scheme).

The advantage here is that I don’t have to write a handler page just to retrieve the image – I can directly drop these into the DataGrid.  That means fewer requests to the server to get the images down to the browser.  However, one of the disadvantages is that the images can no longer be cached by the browser – the image data has to be sent down with every request.  That didn’t really affect my particular scenario – I was building an admin tool for a community manager to review images submitted by the users, and each image would only likely be viewed once (during the actual review) by that one person – but this may not be a good approach for a public facing site getting lots of visits. 

Some of the articles I was reading suggested using this technique for very small images – images where the amount of data being sent is so small that it makes it more cost effective to just send the data down with the rest of the markup and save an additional request to the server.  With larger images, like photos, it probably makes more sense to map a unique URI to each image so the browser (or a CDN) can cache it.

At any rate, while this was not new for the web in general, it was new to me and I was so impressed with how easy this feature made my job that day that I wanted to share.

January 24, 2011 Posted by | Visual Studio/.NET | Comments Off on An image is worth a thousand bytes: Images and Data URIs