17 July 2010

An Explanation

I posted a couple of days ago about a process we’re writing as a console application.  Since I want this to be the highest possible quality code, I believe I should defend/explain the reasons I did things the way I did. 

After that, there will be a couple of additional posts explaining some issues we encountered and how we corrected them.

So, on with the explanation…

As I see it there are two basic questions (feel free to send others, though).  1: Why a console app instead of a service?  2: Why chained streams?

Why A Console Application?

Because this is specifically requested as a temporary solution, and we were given an outer limit of 45 days that it might be needed, it didn’t make sense to do anything more complex than a Windows Service or Console application.  This already feeds into a BizTalk solution, so we could have gone that route, or we could have created a WCF service that would get called somewhere, but those seemed overly complex for what is, in essence, an automated version of “CTRL+C” “CTRL+V.”

So, between a Windows Service and a Console app, each has its advantages and disadvantages.

A service hooked to a FileSystemWatcher could merge the files the moment the daily file comes in.  Since the process that uses the merged file runs on a schedule, that could be a good thing, since we’d know it was always ready to go when needed.  On the other hand, if it failed, we would have to specifically look at the file location to verify that, and if we had to re-run the process we’d either have to re-drop the file or find some other way to get the service to kick off.

A Console application, on the other hand, has to be scheduled through the Windows Scheduler if it’s going to run as an automated process, and that means balancing the need to run the merge early enough to be ready for the secondary process against running it late enough to be sure we’ve got the file.  On the other hand, if it fails, it’s easy enough to pull it up and run it manually or even pull up the solution in TFS and run it in debug mode (to catch what the error is).  And, since it’s scheduled, we know exactly when to check to verify it ran properly.

In the end, it was decided it was kind of “six of one, half a dozen of the other,” and my Programmer’s Virtue of Laziness said that since I was going to have to write a console app during the debug phase anyway, I may as well just keep it as a console app.

Why Chained Streams?

Really, was there another choice?  I could have converted the streams to byte arrays, but that seemed overly complex for what we were trying to accomplish (turns out that was wrong, but I didn’t know that at the time).  We’re trying to do the equivalent of opening one file, hitting “CTRL+A” then “CTRL+C”, opening a second file, hitting “CTRL+END” then “CTRL+V”, and then saving the merged file to a folder for an automated FTP process to pick up.

Reading the files and writing directly to a new, merged file seemed to fit that need quite nicely (and would have, too, except for something I found out later).
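The idea boils down to streaming each source file, in order, into a single output file.  Here’s a minimal sketch of that shape, in Python rather than the C# we actually used, with hypothetical file names, just to show the concept:

```python
import shutil

def merge_files(first_path, second_path, merged_path, chunk_size=64 * 1024):
    """Write the contents of first_path followed by second_path into
    merged_path, copying in fixed-size chunks so neither source file
    has to be loaded into memory all at once."""
    with open(merged_path, "wb") as merged:
        for source_path in (first_path, second_path):
            with open(source_path, "rb") as source:
                # copyfileobj streams chunk_size bytes at a time
                shutil.copyfileobj(source, merged, chunk_size)
```

The same pattern in .NET is two FileStream reads copied into one FileStream write; the “chained streams” wrinkle is just which stream objects sit between the reads and the write.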

So, there you go, there was our reasoning.  Fairly simple and straightforward.  If you have other questions, please post them in the comments.  Maybe I’ll do another follow-up based on those.
