Mark Gilbert's Blog

Science and technology, served light and fluffy.

Moving back from the Cloud

Back in August I mentioned moving my critical files to the cloud so I’d have them available both at home and at work.  At the time I selected OneDrive as my cloud storage provider, and mapped a network drive on both machines so all of my shortcuts would work.

While it solved the problem of losing a very tiny thumbdrive, it introduced some serious lag to Tasks, my custom-written task management app.  Every time I needed to create a new task, update an existing one, or reprioritize the things on my todo list, there was a delay of at least 2-3 seconds (and sometimes a lot more) in committing that change to OneDrive.  Very soon after I began using OneDrive like this, I started to think through how I could hide more of this lag in the background of Tasks so it would be less noticeable.
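
I never got past the thinking stage, but the shape of that change would have been something like the sketch below – a fire-and-forget background write.  The class and method names here are invented for illustration; this is not the actual Tasks code.

Imports System.IO
Imports System.Threading.Tasks

Public Class TaskStore
    ' Write the task file on a background thread so the UI isn't stuck
    ' waiting the 2-3 seconds it takes OneDrive to commit the change.
    ' Error handling and write-queueing are omitted from this sketch.
    Public Sub SaveInBackground(ByVal filePath As String, ByVal contents As String)
        Task.Run(Sub() File.WriteAllText(filePath, contents))
    End Sub
End Class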

Then last week I started getting permission errors from OneDrive.  After some tinkering, I found that I could no longer even drag files into the drive letter I had mapped, or directly modify files through the drive letter, without it complaining about permission issues.  If I did everything through the browser, it was fine, but not through my drive letter.

I spent a night troubleshooting it, and then I pulled out my old thumbdrive as an interim solution.  I went a week like that, hoping that Microsoft would sort out whatever it was they had changed.  This weekend I retested it, and found it was still misbehaving.  I took another look at using Google Drive, but apparently you need some third-party software to map a drive letter to it, so I abandoned that idea.

That’s when I took a hard look at what I had in the cloud, and found that I either didn’t really need access to those files at work, or could come up with alternatives (like the occasional email to myself).

Today was my first day 1) not carrying a thumbdrive, and 2) not relying on the cloud for anything relating to my todo list.  To my surprise, I barely noticed the difference.  I felt just as productive, and no longer experienced any of the lag I was seeing with saving things to OneDrive.  I still use Tasks at home, but the files are now stored on my local server rather than in the cloud, so again – no thumbdrive and no lag.

Am I annoyed with OneDrive for ceasing to work?  A little, but being forced to give it up allowed me to ditch the lag at the same time, so I won’t be annoyed for long.

January 30, 2017 Posted by | Tools and Toys | Comments Off on Moving back from the Cloud

Dropping a mic – and picking up another

In the middle of our most recent podcast we had a major technical failure.

The microphone Katherine, Lucy, and I have used for the first 32 podcasts finally gave up the ghost.  To be fair, it had broken a couple of times before, and I had applied liberal amounts of glue to piece it back together.  This time, though, half of the audio tracks Lucy and I recorded had too much static to be usable.

So, we said goodbye to Computer Associates, and hello to Tonor.

This is a Tonor 3.5mm Cardioid Condenser Microphone.  We recorded all of the tracks for Episode 33 with the Tonor, and we think it sounds at least as good as the old one.  Let us know what you think!

January 11, 2017 Posted by | Podcast, Tools and Toys | Comments Off on Dropping a mic – and picking up another

Moving to the Cloud

For many years, I’ve carried what you might call my life’s work on a thumb drive.

This 4GB drive held sample code I put together, documents I’d written, all of the data files for Tasks, my personal log – everything.  It was getting backed up every night to the cloud, but the primary source of truth was always this drive.

Then a couple of months ago I got a new laptop at work, mostly because I found myself needing to travel more frequently – whether it was down the hall to a conference call, or out of state for an onsite.  Since this was my primary machine, I would faithfully plug this thumb drive into the side.  However, it would stick out so far I was always afraid I would snag it on something, and bend or otherwise damage it.

So I replaced it with this tiny SanDisk.

My thinking was that it would keep a low-enough profile that I wouldn’t need to worry about it catching on anything.  I wasn’t wrong there, but moving to such a small thumb drive had an unexpected consequence.  I had to pay extra attention to where the thing was when it wasn’t a) plugged into my computer, or b) zipped up in my laptop case.

In other words, it was so tiny I was in danger of losing it every time I put it in my pocket. 

That thought increasingly nagged me.  Then one morning I walked out to my car and it fell out of my pocket when I retrieved my keys.  When I got to the car, I patted my pocket to make sure it was there, and it wasn’t.  I hurriedly retraced my steps, and found it in the garage.  I never even heard it hit the ground because the noise from the garage door opener masked the impact.  I resolved then and there to find a better solution.

I took a long hard look at what was on the drive, and what I actually needed with me:

  • I was keeping several software installers and configuration files on that drive.  Those were easy to move to my home server.  If I actually needed them at work, I could always wait a day, and bring in what I needed the next morning.
  • The next batch of files comprised the bulk of the contents of the drive.  They were files I rarely dipped into, and more often than not I was actually at home when I did.  Those also got moved onto my home server.
  • Then there were a handful of files that I would actually need at work.  These were rarely (if ever) needed at home, so those got left on my corporate user drive.

Then it came down to the files that I would need regularly both at home and at work – the files that Tasks required, and my personal log.  I didn’t want to load those onto my phone because then I’d have to keep it jacked into my computer to access them.  I also discarded any "syncing" solution due to my previous bad experiences.

That left me with putting these files in the cloud.  Would it be possible to map a drive letter to a folder somewhere (a requirement, to keep Tasks working as-is), and access them from home or at work?  I looked at both Microsoft’s OneDrive and Google’s Drive, and as it turns out both allow me to map a local drive letter.  I ultimately went with OneDrive since I already had files up there from past projects.

With these files in the cloud, I had completely weaned myself off of the thumb drive.  I could now fire up Tasks at home or at work, and get to all of my project notes.  The biggest downside has been the lag – saving files to the cloud is substantially slower than saving them to a thumb drive.  I’ve started thinking about ways to modify Tasks to do its saving-to-the-cloud in the background, making it more responsive.

I have so far had one occasion where I lost a file update (memories of August 2009 came rushing back – see the links above).  I wasn’t sure what triggered it, but somewhere along the way one of my most critical project files got completely wiped out – a 0-byte file was all that remained.  I had a relatively recent copy, and only ended up losing a few hours’ worth of work, but as a result, I’ve tweaked my regular backup to pull these files down from OneDrive nightly.
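
The nightly pull-down itself is nothing fancy – conceptually it’s little more than the sketch below, with the folder locations made up for illustration (my actual backup setup differs).

Imports System.IO

Module NightlyBackup
    ' Copy everything in the mapped OneDrive folder down to a dated local
    ' backup folder.  Paths are placeholders, not my real drive letters.
    Sub Main()
        Dim source As String = "O:\Tasks"
        Dim target As String = Path.Combine("D:\Backups", DateTime.Now.ToString("yyyy-MM-dd"))

        Directory.CreateDirectory(target)
        For Each sourceFile As String In Directory.GetFiles(source)
            File.Copy(sourceFile, Path.Combine(target, Path.GetFileName(sourceFile)), True)
        Next
    End Sub
End Module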

All in all, the move to the cloud seems to be working fairly well, and it certainly renders the question of "what do I do if I lose my thumb drive?" moot.

August 13, 2016 Posted by | Tools and Toys | Comments Off on Moving to the Cloud

Building Tools

For the better part of a year, I’ve been trying to build a system to capture and analyze data points – on me.  This is part of a long-shot plan to find out what might be contributing to my headaches.  I’m pleased to say that since July of 2015, I’ve been successfully capturing over a dozen data points on myself, multiple times a day.

I started out with the easiest thing that could work: an alarm clock telling me to take a survey every 15 minutes, and a Windows desktop app called iSelfSurvey that saved its data to a text file.
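
The capture side is deliberately simple – at its core, it just appends a delimited record to a text file.  Something like the sketch below (the file name and fields are stand-ins; the real app captures over a dozen data points):

Imports System.IO

Module SurveyLog
    ' Append one survey response as a comma-separated line, stamped with
    ' the date and time it was recorded.
    Sub LogResponse(ByVal headacheSeverity As Integer, ByVal hoursOfSleep As Double)
        Dim record As String = String.Format("{0:yyyy-MM-dd HH:mm},{1},{2}", DateTime.Now, headacheSeverity, hoursOfSleep)
        File.AppendAllText("iSelfSurvey.csv", record & Environment.NewLine)
    End Sub
End Module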

Then, in mid-January, I launched an Android version of iSelfSurvey that allows me to capture data outside of the 7:30-5 window when I’m at my computer at work.

Late last year, I also built a companion application called iSelfAnalysis that allows me to upload those data files, and then run a number of functions on the data, looking for patterns.

For example: is there any correlation between the severity of my headaches and the number of hours of sleep I got the night before?  Is there any correlation with the amount of liquid I’ve been drinking?  How about if my blood sugar took a dive 4 hours ago – does that affect my headaches now?  I’ve only begun to scratch the surface of the kinds of questions I can ask – and now answer.  I now have the tools in place to run experiments on myself – experiments where I adjust one data point and observe how my headaches react.
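
To give a flavor of what those functions look like: the sleep question boils down to a Pearson correlation between two columns of data.  A sketch (not the actual iSelfAnalysis code):

Module Analysis
    ' Pearson correlation between two equal-length series – e.g. headache
    ' severity vs. hours of sleep the night before.  Returns -1 to +1.
    Function Correlation(ByVal x As Double(), ByVal y As Double()) As Double
        Dim meanX As Double = 0, meanY As Double = 0
        For i As Integer = 0 To x.Length - 1
            meanX += x(i)
            meanY += y(i)
        Next
        meanX /= x.Length
        meanY /= y.Length

        Dim covariance As Double = 0, varianceX As Double = 0, varianceY As Double = 0
        For i As Integer = 0 To x.Length - 1
            covariance += (x(i) - meanX) * (y(i) - meanY)
            varianceX += (x(i) - meanX) ^ 2
            varianceY += (y(i) - meanY) ^ 2
        Next

        Return covariance / Math.Sqrt(varianceX * varianceY)
    End Function
End Module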

Is this system over the top?  Have I spent too much time sharpening my axe?  No.  Asking anyone to answer a dozen questions every 15 minutes is moderately intrusive at best.  Trying to analyze the data by hand would get old after about a day.  This system takes as much of the pain out of this process as I can manage, and makes it far more likely that I’ll continue day after day after day.

I have 127 days’ worth of data collected so far – thousands of data points.  I don’t expect to find any smoking guns, but if I can find ways to minimize my headaches, I’ll consider that a win.

January 30, 2016 Posted by | Science, Tools and Toys | Comments Off on Building Tools

New hardware – 3.5” Floppy Drive

I think I now have the (somewhat dubious) honor of being possibly the only person on the planet who is running Windows 7…

With an internal 3.5″ floppy disk drive.

There she is – my new “A” drive, visible in Windows Explorer.

Why, you ask?

I needed to pull several old files off an old computer (I think it was solidly a teenager) running an even older operating system – Windows 95.  The machine had no network card, did not support USB, and had a CD-ROM – emphasis on the “Read-Only”.  The 3.5” floppy on that machine worked fine, and I had a small number of disks for it.  I had a spare floppy drive sitting around, so I thought I’d try that first.

When I opened up my primary machine, to my great astonishment I found that the ASUS motherboard in it actually had a legacy floppy controller.  I plugged the drive in, and Windows recognized it immediately.  I proceeded to transfer the files off the old machine, circa-1993 sneakernet style.

Now, off-hand, do you remember what a 3.5” floppy drive in use sounds like?  Well, for your listening delight, here are some sounds from the last century:

That brings back some good memories.

October 24, 2012 Posted by | Tools and Toys | 2 Comments

Experimental road work ahead

On Saturday, I repaved my machine – completely reformatted and reloaded the OS and all software.

I didn’t decide to do this until Saturday morning.  The night before, I had tried to get my Kinect hooked up as a web cam for use with Skype.  I mean, it’s a video camera, right?  How hard could it be?  Famous last words.  As it turns out, the Kinect camera doesn’t register itself with the OS in a way that Skype can recognize it as a web cam.  Then I found a guy who wrote a shim called KinectCam.ax that addresses this very issue.  The .ax file is a binary (a DirectShow filter) that needs to be registered with the machine, and after several failed attempts, I finally managed to get it registered.

But Skype STILL wouldn’t recognize that a camera was attached.

This was the latest in a line of things I’ve tried to do with my machine over the last couple of years that hasn’t worked despite my best efforts.  I didn’t have the stomach to sit through another marathon debugging session, so I purchased a Logitech web cam and hooked it up to my wife’s computer.  Within 15 minutes we had the camera hooked up and working, and were happily Skyping away.  To be fair, I did not try to hook the Kinect up to my wife’s machine, or try to register KinectCam.ax on it.  It’s entirely possible that it would have worked on her machine.  But on that Friday night, I hated computers.

I woke up Saturday morning with a fresh thought.  I’ve had many more things not work on my machine than on my wife’s machine, and the main difference is the edition of Windows 7 – hers is 32-bit while mine is 64-bit.  Perhaps if I had 32-bit on my machine, I wouldn’t have as many problems.  So, I decided to repave it, and load Windows 7 32-bit.  This would be the first of three experiments – how big a deal would it be to load various pieces of software onto my machine?

Since I was starting fresh, I decided to scrap another piece of my original master plan.  When I first loaded Windows 7 on this computer, I actually installed it twice – the first was the "base" OS, and then I loaded a Windows 7 virtual image on top of it.  My thought was that I could periodically save off the virtual image, with all of my software and configuration settings, to an external hard drive.  Then, if I ever needed to reformat my machine, I could just load the base OS, copy the virtual OS image file over, and fire it up.  Voila!  Nearly-instant computer.  While I did periodically back my virtual image up, I never found a need to reload it.  And because the virtual image was, well, virtual, I was losing out on at least some of the power of my machine.  So this time, I just installed a single copy of Windows, right onto the metal.

Now, because I’ve loaded Windows 32-bit rather than 64-bit onto the metal, I’m losing out on almost half of the 6GB of RAM in the machine, but this leads to my second experiment – how does Windows 32-bit running on the metal with 3+ GB of RAM compare to 64-bit running virtually with 6GB of RAM?  The answer so far appears to be "slower".  My screen saver of choice for many years now has been SETI@Home.  Last week on 64-bit / 6GB, it would run very smoothly, with no noticeable jerkiness.  Now, with 32-bit / 3 GB, it is VERY jerky.  In fact, the animation comes to a stop every few seconds.  Since the processor in both of these tests is the same, I can only conclude that removing almost 3 GB of available RAM is the problem.  SETI@Home isn’t a primary application for me, so I can survive if that’s a bit slower.  I’ll have to see how various games like LEGO Harry Potter perform.  I may decide to go back to 64-bit before this year is out, which leads me to my third experiment…

How little software can I get away with loading?

For many years now, I’ve maintained a growing list of software to install on my computer.  I’ve averaged about one full reload – either a work machine or a home machine – every year for over a decade.  In having to do it about once a year, it quickly became apparent that I couldn’t keep track of all of the various utilities, applications, and tools I use, let alone the order they need to be installed in, where to find them, or what the license keys are for each.  I started a document to track all of that.

My work machine is definitely a beast when it comes to things I use on a regular or semi-regular basis – so much so that if I started from scratch, and even if I had everything at hand, it would still take me the better part of two days to reload my machine.  My home setup isn’t much better.  I didn’t want to go through that this time around.  I decided to load just what I knew I would need to use in the next two weeks, and the rest of the list will be loaded on an as-needed basis.

The upside here is that it drastically cuts down on the time needed to install the OS fresh.  That means that if I do decide to reload Windows 64-bit, I’ll only be out a few hours.  The possible downside is how annoying it may be to load something new up when I discover I need it.

***

The other thought that has been in the back of my mind is how much of what I use on my machine could actually be put in the cloud.  Already, I use cloud-based applications for email, a personal wiki, calendaring, my contact list, and so on.  If I could push more of what I do into the cloud, it would mean less computer is needed, and a much faster reload time.  Of course, the cloud has its own drawbacks – if you can’t connect to the interwebs, you’re sunk.

Baby steps, Mark.  Baby steps.

July 11, 2012 Posted by | General, Tools and Toys | Comments Off on Experimental road work ahead

RedGate Reflector Announcement

Here is a copy of an email I sent earlier this evening to RedGate, in response to their announcement regarding Reflector (http://www.red-gate.com/products/dotnet-development/reflector/announcement):

***

To: Info@Red-Gate.com
Subject: Reflector free version being discontinued?

Earlier today I came across an announcement on the RedGate .NET Reflector landing page regarding the future of the product.  Your decision to make version 7 a paid product is a little disappointing, but not really surprising.  You are, after all, a commercial enterprise, and I understand the need to charge for the products and services you provide – even if they were previously made available to the development community free of charge.

What disturbs me, however, is your decision to end our ability as developers to use the earlier versions of the tool after May 30, 2011, according to the FAQs on this announcement (http://www.red-gate.com/products/dotnet-development/reflector/announcement-faq):

Q: How much longer will I be able to obtain and use a free version of .NET Reflector?
A: A free version will be available for download until the release of Version 7, scheduled for early March. The free version will continue working until May 30, 2011.

Do I understand this correctly?  If I were to fire up Reflector on May 31, and receive the "a new version is available, would you like to update now?" message, and I respond "No", will the tool continue to function?  Or will it say "sorry, you must upgrade to version 7 to continue using this tool", which will require a minimum payment of $35?

I look forward to your clarifying response.

Mark Gilbert.
Co-Coordinator
Microsoft Developers of Southwest Michigan
http://DevMI.com

***

Update 2/5/2011: Anthony from the RedGate .NET Reflector team responded in less than a day:

“Just to clarify your question, the free version will continue to work until May 30th 2011. So you will need to upgrade to the new version after this period.”

Terribly disappointing.  I responded to Anthony, and very pointedly argued what a terrible idea time-bombing version 6 was, and offered two possible solutions – remove the time-bomb from version 6, and/or create a stripped-down, free edition of version 7.

The RedGate forums have been hopping since the announcement, and there are a lot of people very upset about this.  It’s also interesting that in all of the feedback, there has been very little response by RedGate, at least on the forums.  My hope is that the folks at RedGate HQ are reviewing this feedback and are formulating a change to their approach.

February 3, 2011 Posted by | Tools and Toys, Visual Studio/.NET | Comments Off on RedGate Reflector Announcement

My buffer runneth over

In the new release of NAntRunner, one of the new features is a checkbox to add the -verbose switch to the NAnt call. In the course of testing it, I found that NAnt would hang when I ran a non-trivial build and deploy script. When NAnt.exe hung, it locked up NAntRunner, too. If I ran that same script from the command line using NAnt (bypassing NAntRunner), it completed fine.

Through some tinkering I found that once NAnt.exe hung, I could kill it off using Task Manager and then NAntRunner would give me control back. I started doing some searching for “process.start hung” and “process.start doesn’t exit”, and eventually came to a couple of articles that talked about the output buffers filling up:

http://www.velocityreviews.com/forums/t123227-console-application-hangs-when-called-from-processstart.html

http://stackoverflow.com/questions/439617/hanging-process-when-run-with-net-process-start-whats-wrong

Both of these discuss redirecting the buffers for a spawned process. As it turns out, the buffers have a 2K limit and if they fill up, the process hangs. That sounded awfully familiar.

All versions of NAntRunner have redirected the standard output and error buffers for the spawned process so I could display the contents of those buffers in the NAntRunner interface. Before 0.4, I would wait until the process finished and then get the entire contents of the buffer:

Class NAntProcess
    …
    ' Read the entire buffer contents once the process has finished.
    Public ReadOnly Property StandardOutput() As String
        Get
            Return Me._SystemProcess.StandardOutput.ReadToEnd
        End Get
    End Property

    Public ReadOnly Property StandardError() As String
        Get
            Return Me._SystemProcess.StandardError.ReadToEnd
        End Get
    End Property
    …
End Class

When I started using the verbose switch, the NAnt process hung because it generated much more output than before (imagine that – “verbose” equals “more”), and the output buffers filled up before the process finished.  So, I had to change how I was pulling the output messages off of the process.

The VelocityReviews.com link above mentions BeginOutputReadLine, so I did some more digging and came up with this MSDN article, which shows how to pull these messages off asynchronously:

http://msdn.microsoft.com/en-us/library/system.diagnostics.process.beginoutputreadline.aspx

This involved handling the OutputDataReceived and ErrorDataReceived events for the process, and then appending the messages as they were generated using a pair of StringBuilder objects.  The modifications looked like the following (only the relevant portions of the NAntProcess class are shown):

Class NAntProcess
    Private _StandardOutputBuilder As StringBuilder
    Private _ErrorOutputBuilder As StringBuilder
    Public Sub New(ByVal NAntExecutablePath As String, _
                   ByVal NewScriptToRun As String, _
                   ByVal NewTargetFramework As String, _
                   ByVal NewBuildTarget As String, _
                   ByVal ShouldEnableVerboseMessages As String, _
                   ByVal NewOtherArgs As String)
        …

        ' Configure the asynchronous output collection
        Me._StandardOutputBuilder = New StringBuilder
        Me._StandardOutputBuilder.AppendLine(Me._NAntExecutable & " " & Me.GetArgumentsForNAnt)
        Me._StandardOutputBuilder.AppendLine()
        Me._StandardOutputBuilder.AppendLine()
        Me._ErrorOutputBuilder = New StringBuilder
        AddHandler Me._SystemProcess.OutputDataReceived, AddressOf Me.StandardOutputHandler
        AddHandler Me._SystemProcess.ErrorDataReceived, AddressOf Me.ErrorOutputHandler
    End Sub

    Private Sub StandardOutputHandler(ByVal sendingProcess As Object, ByVal outLine As DataReceivedEventArgs)
        If Not String.IsNullOrEmpty(outLine.Data) Then
            Me._StandardOutputBuilder.AppendLine(outLine.Data)
        End If
    End Sub

    Private Sub ErrorOutputHandler(ByVal sendingProcess As Object, ByVal outLine As DataReceivedEventArgs)
        If Not String.IsNullOrEmpty(outLine.Data) Then
            Me._ErrorOutputBuilder.AppendLine(outLine.Data)
        End If
    End Sub

    Public ReadOnly Property StandardOutput() As String
        Get
            Return Me._StandardOutputBuilder.ToString
        End Get
    End Property

    Public ReadOnly Property StandardError() As String
        Get
            Return Me._ErrorOutputBuilder.ToString
        End Get
    End Property
End Class
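
One note if you’re adapting this: the handlers by themselves don’t do anything.  The process has to be started with redirection enabled and the asynchronous reads explicitly begun.  That part isn’t shown in the excerpt above, so consider this a sketch of the start sequence:

' Somewhere after the handlers are wired up:
Me._SystemProcess.StartInfo.UseShellExecute = False
Me._SystemProcess.StartInfo.RedirectStandardOutput = True
Me._SystemProcess.StartInfo.RedirectStandardError = True
Me._SystemProcess.Start()
Me._SystemProcess.BeginOutputReadLine()   ' start the async reads so OutputDataReceived fires
Me._SystemProcess.BeginErrorReadLine()    ' likewise for ErrorDataReceived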

After I got this implemented, I realized that I wouldn’t need the verbose switch to cause this problem.  In theory, the “regular” output from a very involved NAnt script could cause the buffers to fill up.  It’s just a fluke that I hadn’t hit this before now.

September 7, 2009 Posted by | Tools and Toys, Visual Studio/.NET | 2 Comments

NAntRunner 0.4 Released

Meet the new kid

I just released NAntRunner 0.4, now available at http://CodePlex.com/NAntRunner. Upgrading to the new version is easy. Simply download the 0.4 ZIP, and extract it to the folder with your previous installation of NAntRunner (your existing settings will be preserved). There were three main things I wanted to tackle in this release. I ended up doing four.

First, I replaced the treeview controls with a user control that rolls everything up. This user control actually started out life in another one of my tools – IISTweak – and I had always intended to incorporate it into NAntRunner. When I did that I realized I had some more work to do to get it closer to being generic. I don’t think the control is completely baked yet, but it’s definitely closer.

For those of you who download the source code and want to tinker, I found an interesting quirk with Visual Studio 2008 and this update. The new user control is called NavigationTree, and it is part of the NAntRunner assembly (the executable). If you do anything with the control in the designer mode on the Main form, Studio updates the instantiation of this control in Main.Designer.vb file to read as follows:

Me.MainNavigationTree = New NAntRunner.NavigationTree

That, however, causes a compile-time error.  If you remove the “NAntRunner” namespace qualifier so it reads as follows:

Me.MainNavigationTree = New NavigationTree

Then the compiler is happy again. Sigh. It’s the little things that cause the grey hairs.

The second update was to add a checkbox that will add the -verbose switch to the NAnt call.  As you’d expect, the additional messages generated will appear in the NAntRunner progress window.  In previous versions of NAntRunner, you could include this yourself using the “Other NAnt Args” box, but having a checkbox feeds my lazy side.

Third, I added the ability to reorder scripts within a group and to move scripts from one group to another – all via drag and drop.  I found myself having to scan down through the groups looking for the right script more and more recently, and really wanted to be able to move the more commonly used ones to the top of the list.  In the past you could always hack the NAntRunnerSettings.xml file to manually reorder these, but this is much more intuitive.  Reordering entire groups and their contents isn’t available at this time.

The one item I didn’t plan on but ended up doing was reworking how the standard output and error messages were retrieved from the spawned NAnt process. Previously it was done at the end, once the NAnt task had completed. Version 0.4 does this asynchronously via event handlers – as the messages are generated. The reasons why and the details of the implementation will be a subject for a future post.

The future

I started thinking about what would define the 1.0 release. My goal with NAntRunner has always been to provide easier access to the common features of NAnt. I think it’s almost there. I may extend the “verbose” checkbox into a “message level” control (perhaps a slider) so NAntRunner has native support for the -verbose, -debug, and -quiet switches. I’ve also thought about adding logging support (using the -logger switch). Finally, I may extend the interface to allow you to save the NAntRunner configuration settings used for a given build script so that the next time you open that script NAntRunner will configure itself accordingly (saving you a few clicks and keystrokes).

Minimally, I’ve dropped several TODOs in the code for things that I need to get back to.  Since many of these involve error handling, they’re a must for version 1.0.  If you can’t rely on your tools, then you need better tools.

Enjoy the new release, and let me know what you think.

September 4, 2009 Posted by | Tools and Toys | Comments Off on NAntRunner 0.4 Released

Organizational System Fail

The core of my organizational system for many years has been Microsoft OneNote (most of that experience is with OneNote 2003, although I spent a year with OneNote 2007).  In the last year I coupled that with Microsoft Live Mesh, and the combination was awesome.  I could organize my day and week from home, then walk into the office and find those notes synched up with my work machine.

It was easy to use.  It was reliable.  I didn’t have to think about it – it just worked.

And then on Friday, it didn’t.

I booted up my machine to find two projects’ worth of notes completely gone.  Those two projects just happened to be my MAIN two projects currently – weeks of thoughts, ideas, questions, and solutions: gone.

I actually found myself talking to my computer: “Oh no, you did NOT just do that.”  Sigh.  Oh yes, it did.  What a way to start a Friday.

I managed to recover a few of the points from memory, but as Professor Jones would say “I wrote them down…so that I wouldn’t HAVE to remember.”  More importantly, I resolved that this weekend I would solve the core issue.  Namely – the combination of OneNote and Live Mesh, or more specifically, how OneNote stores its data.

The core of my organizational system was four OneNote “notebooks”.  Each notebook contained one or more “pages” where each page contained the notes for a single project.  The notebooks were named “Today”, “This Week”, “Work” and “Personal”, and my typical day would begin by moving pages into “Today” so I would have the focus for the day.  As I completed them, I’d delete them.  If they were put on hold for some reason, I’d move them to “This Week” or one of the others.

Live Mesh, for its part, is configured to watch one specific folder on my machine – the folder where I keep my .one files.  If it sees one on my hard drive that’s more recent than the corresponding version in the cloud, it copies it up.  If it finds one in the cloud that’s more recent than the corresponding one on my hard drive, it copies it down.
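
Paraphrased as code, the rule looks something like the sketch below – this is my mental model of Live Mesh’s last-write-wins behavior, not its actual implementation.

Imports System.IO

Module SyncRule
    ' Whichever copy was written most recently wins; the other is overwritten.
    Sub Synch(ByVal localFile As FileInfo, ByVal cloudFile As FileInfo)
        If localFile.LastWriteTimeUtc > cloudFile.LastWriteTimeUtc Then
            localFile.CopyTo(cloudFile.FullName, True)   ' copy up
        ElseIf cloudFile.LastWriteTimeUtc > localFile.LastWriteTimeUtc Then
            cloudFile.CopyTo(localFile.FullName, True)   ' copy down
        End If
    End Sub
End Module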

Simple, right?  Of course not.  If it were simple I wouldn’t be writing a blog post about it.

My machine at home is on 24/7.  That means that I can move a page from notebook A to B, and within a minute or so that move will be replicated to the cloud.  My machine at work, on the other hand, is usually off when I’m not there.  So, when I get into work in the morning, I boot up my machine and Live Mesh with it.  Live Mesh then takes a minute or two to look at the cloud copies of my .one files, and if it finds any updates it pulls them down.

OneNote manages a single .one file for each notebook.  As I moved a page from “Today” to “This Week”, for example, both of the corresponding .one files would be updated to reflect the move.  That means that if I move a page from notebook A to notebook B, and then go into work, boot up, and change B at work before Live Mesh has had a chance to bring down the update to B from home, that update will be lost.  The page won’t exist in A anymore (because I moved it out of there), and my work-update to B will overwrite the home-update to B (because the work-update is more recent).

I actually had this happen to me a couple of months ago.  The damage then was limited to a relatively minor and short-lived project (“schedule dental appointment”, or something like that), so it was easily recovered from memory.  At the time I thought I had learned my lessons:

1) When I come into the office in the morning, let Live Mesh boot up and synch up before trying to use OneNote locally.
2) When I am about to leave the office in the evening, let Live Mesh post my most recent updates to the cloud before shutting down.

As near as I can tell, on Friday I didn’t do anything with OneNote for the first 15 minutes my machine was running – I didn’t even have OneNote open.

So, as I sat there in nearly stunned silence last Friday morning trying to will my notes back into existence, I resolved that I would do one of the following:

1) Find a way to store my notebook pages in separate files so simple moves wouldn’t cause entire projects to blink out of existence, or
2) Find another tool to keep my notes organized, or
3) Write a new tool that would meet my needs.

I did some digging into using OneNote and Live Mesh, and found many people raving about the combination, but nobody talking about any synching problems.  The closest I got was this post on Microsoft Connect.

The other tools (task managers) that I found were way too UI-heavy: too many buttons, options, colors, etc.  I just wanted something very subdued, easy, and flexible.  I don’t need or want to subdivide my projects into 500 discrete steps.  All I need is a text editor and the ability to indent – after all, that’s basically what I had been using OneNote for all these years. 

So, that left me with rolling my own.  What I ended up with was a simple desktop application that I call “Tasks”.

Tasks basically allows me to organize and manage a pile of text files – one per project – on the file system.  Tasks stores all of the text files in one folder on the file system, and that folder is synched between my two machines via Live Mesh.  Additionally, the task groups and their contents (Today, Tomorrow, etc.) are defined in an XML file, also in that same, synched folder.  The result is that I see the exact same view at home or at work.  Within Tasks I can create new tasks, move them from group to group, reorder them within a group, change the file name, open an editor to modify their contents, and delete them.
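
To make that storage model concrete, here’s a guess at what reading the settings file might look like – the file name and XML schema here are invented, since I haven’t shown the actual format:

Imports System.IO
Imports System.Xml

Module TaskSettings
    ' Walk the (hypothetical) groups XML and list each group's task files.
    Sub LoadGroups(ByVal syncedFolder As String)
        Dim settings As New XmlDocument()
        settings.Load(Path.Combine(syncedFolder, "TasksSettings.xml"))

        For Each groupNode As XmlNode In settings.SelectNodes("/Groups/Group")
            Console.WriteLine("Group: " & groupNode.Attributes("name").Value)
            For Each taskNode As XmlNode In groupNode.SelectNodes("Task")
                Console.WriteLine("  " & taskNode.InnerText)
            Next
        Next
    End Sub
End Module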

I haven’t decided if I’m going to release this as another tool or not (at the very least, it’s not ready for general release now).  This was something I crammed in over the weekend, so while it does what I need it to do, there are a lot of little things that were left out due to time constraints.  However, today was the first real day in action for Tasks, and it performed well. 

For me, the tool to manage tasks has become such a critical application that it needs to be ridiculously dependable.  Don’t get me wrong – this isn’t medical monitoring equipment or a spacecraft guidance system, it’s just a task manager.  But, it’s the tool that allows me to be as organized and efficient as I am.  If that tool isn’t easy to use, if it isn’t reliable, if I have to think about it, then questions of “is my data safe” begin to creep in and sap that efficiency.  Take those critical tools – task managers, source control, data backup, whatever they are – and work to get them as automatic and as reliable as breathing.  The time and anxiety you save by doing that more than makes up for the investment.

August 24, 2009 Posted by | General, Tools and Toys | 2 Comments