Friday, December 02, 2005


I don't normally post to my blog when I am just referencing another blog, but this was too funny not to pass along:

It's all about why SQL Server sucks.

Friday, November 18, 2005

To All the Blogs I've Loved Before

I noticed that a good many of you who read my blog also read Jake's...or at least you used to. Despite his best efforts to alert the entire known human population to the fact that his long-AWOL blog is back online, in case any of you were left in the dark - check out Jake's blog. Jake - how we have missed you.

Also, in an earlier posting, I made reference to a colleague of mine. I did not realize that he had a blog (shame on me). So here is a link to Mike's blog, as well.

What a crazy bunch of guys I work with :)

Thursday, November 17, 2005

Who Should Pay for Development Tools?

I work for a consulting company, and a question was posed to me yesterday about the cost of purchasing a tool like Red Gate's. This SQL diff tool will help me deploy changes in a faster, safer way to my client's production server. So should my consulting company invest in this tool, or should my client?

In the case of the SQL diff tool, in order for the tool to work best, I need to have access to the database server that holds both the production and the development databases so I can compare them. This tool needs to get installed on my client's database server, so they need to purchase it. Should my company also purchase this tool so we can use it on other projects?

What about tools like CodeSmith? CodeSmith helps my company deliver a lower-cost and higher-quality solution to my clients, since templates can spit out our DOM library code quickly and without human typo errors that hand-cranking the code would produce. So since my client is benefiting from the tool, should my client share in or take full responsibility for the cost of the purchase?

I think that when a client needs to install a tool on their environment, they have to pay for it. I think that when the client does not need to install the tool on their environment, my company should pay for it.

The reason I feel this way is because I see these tools as investments. Tools that help my company deliver higher-quality solutions faster than our competition could potentially cut down on the billable hours we spend on each project. While this may seem like a loss for our company to some, I see it as giving us an edge over our competition. Today, theoretically, we could deliver a solution identical to one we provided before we had this tool, at a lower cost. And since this lower cost was attained by removing some of the manual, labor-intensive busy work - the part where we get bored and are prone to make errors - we also end up with a higher-quality solution.

I feel that this edge we now have should result in two layers of benefit. On the one hand, we should be able to win more business - even if the business we win is at a lower cost. If we can do the same things as our competitor, but for less money, we should win the business.

On the other hand, the tools should enhance our ability to attract and retain both clients and consultants. If we deliver higher-quality solutions, our clients will rave about us and bring us more business in the future, right? If we don't have to spend as much time deploying, troubleshooting syntax or synchronization errors, and maintaining inconsistent code, we should have more time for our consultants to spend designing, creating, and learning.

I know I may have simplified the argument a bit, but I do feel that the small investment in a tool like Red Gate's or CodeSmith should pay off exponentially. I feel that the payoff can help our clients, but in doing so, help my company even more. So I think that if a consulting company wants to remain competitive, it should welcome tools that take the busy work out of software engineering, opening the door for its consultants to invest in being innovative instead of having to invest time in bug-fixing, manual database synchronization, or the writing of repetitive code.

Wednesday, November 16, 2005

SQL Diff Tool

I know it has been forever since I last blogged, and the slew of comments I have gotten in the past few months has deterred me from posting. Apparently, there are thousands of companies out there who think I have a "Great blog! Keep up the good work! By the way...please [get a new mortgage/enter our sweepstakes/buy our ultimate diet plan/etc]." Maybe I need to start paying someone to host my blog so I will keep out the riff-raff.

Anyway, Mike Hodnick recommended a database tool to me today that I am in the process of testing. It is very cool. I have not used other SQL Diff tools, so I am a first-timer. This thing rocks! I ran a diff on my production database and my development database. Not only did the tool tell me about the schema changes, allow me to choose which changes I wanted to resolve, and allow me to choose which database should get altered, it also brought some issues to my attention in the production environment of which I was unaware. For some reason, my foreign keys were missing. The tool generated a script, which I reviewed and then ran against my production database. I did have to clean up some data due to the missing keys, but now I have a quality production database that matches my development environment. I love it!

The tool is called SQL Compare and is sold by Red Gate. By itself, the compare tool costs $295, but you can download a 14-day trial. Check it out.

Wednesday, August 03, 2005

Technology Allowance

My company gives each member of the technical staff a bonus each year, thanks to our Raving Employees team, which we must spend on some cool techno-gadgety stuff. The idea is to keep us loving and playing with technology to "feed our inner geek." I am having a hard time deciding what to do with my allowance this year. Here are some options:

1. Buy new headphones. (I listen to music at work all the time, but do I really need to spend more than the $3.99 I spent on my current pair of ear buds, which have lasted 4 years?)
2. Buy a smart phone. (T-Mobile kind of stinks in what they offer, but I have read that the Blackberry 7100T is okay.)
3. Buy a wireless mouse. (This option doesn't eat up much of my budget, and who really cares if your mouse has a wire?)
4. Buy a desktop for home, so I can install all of the beta versions of software without harming my work environment. (But a desktop I buy today will be a dinosaur two years from now.)

What do you think I should spend my money on?

Tuesday, August 02, 2005

Why We Hate HR?

The other morning my day started out with a friend of mine walking into my office and calling me a loser. I was offended and hurt by the unreasonable judgment he passed on me, so I asked him what he meant by it. He explained that he has been disappointed by my lack of posting to my blog. I had to concede. He is correct. I have been a huge loser. So, to redeem myself, I am posting today. Furthermore, my post today is in honor of my friend with the insight to call me a loser.

Last week I was having lunch with Neil - who remains blogless - and the JavaKid. I had been invited by management to participate, as a representative of the technical staff, in the interview process for a new HR Manager. I was honored to accept the invitation, but struggled a bit while compiling the list of possible interview questions that I like to prepare before any interview. I asked Neil and the JavaKid what they thought some good questions would be. Neil directed me to a very interesting article in Fast Company titled Why We Hate HR. Besides having an amusing title, the article raised some very good questions about the role of HR. To be honest, I had always seen HR as the people-friendly staff who help with any benefits questions. The article suggests that HR people of that caliber would better serve society as social workers. Heh. It also suggests that HR should play a role in strategic planning, and should always back up any benefit recommendations with some hard, tangible business benefits that can be quantified and proved. Hmmm…metrics for HR! What a novel idea.

Wouldn’t it be cool if HR, instead of setting up a bunch of policy that we had to struggle to work around, used their time and energy to prove to my management team why it is worthwhile, in the language of business that my management team understands, for them to keep me happy and recruit talent?

Just a thought...

Now I have posted, and I no longer rate as a loser. At least that is what I keep telling myself :)

Wednesday, June 01, 2005

A User Story Without the User?

I have been making use of the concept of user stories on my current project. Although we have a functional specification document, we are still asking our users to write user stories for each module in our specification. I have found the user story to be a great tool for helping my team understand what it is that the user is trying to accomplish. The functional specification has some great detail, but the user story always seems to flesh out small details prior to development that, had we relied on the specification alone, we would have missed until testing.

The user stories we request from our client include a short description of the functionality, a notes area for key points, and a list of client acceptance tests. This list of tests should be, from the user perspective, what the user expects to see happen before they consider each module to be complete. The exercise of asking the users to come up with a list of tests on their own has also helped us to identify change before we start development. This list of tests gives my team a quantifiable set of user expectations up front that helps us build our solution to meet their needs, rather than setting the user expectations to meet our solution.

When we first introduced user stories to our clients, we walked through the creation of a couple together. This helped our users get an idea of what might be included in the user story. Arguably, by doing the walk-through, we could have inadvertently shaded our user’s perspective. I still feel that the walk-through was a good way to get the users comfortable with what it is we wanted from them.

One snagging point that keeps bothering me about the user story in our project, is that we are not getting input from the end users. Our user stories are written by a business analyst who works for the client and the client project sponsor. While I much prefer a higher-level user to no user, I feel that the spirit of the user story is that the end-user is the one writing the story.

I have also heard of developers writing user stories when users are unwilling or unable to write the stories on their own. Again, I am sure that a developer-written story is better than no story. But is that really a user story? Or is this just another kind of documentation? I wonder if we aren’t missing the point of a user story when someone other than the end user is writing it.

Friday, May 27, 2005

SmartNavigation: Friend or Foe?

When studying for my certification exams, one thing I read about that seemed cool is SmartNavigation. I must confess that I had not used this feature at all in the past. For those who do not know about SmartNavigation, it is a page-level property you can set that is supposed to fix a number of web page post back woes. This easily set property is used to prevent page flicker and the loss of state related to where the user was on the form before the post back. This sounded like a great thing to try on a web form where there is a post back event linked to a text box's TextChanged event. In my case, I am calculating a total cost based on the amounts entered in a few text boxes. Without SmartNavigation, when the page posted back, the focus was back at the top of the form. This was annoying behavior to my users, who had to enter a few different values that went into the total calculation. Each time they changed an amount field, they were popped back up to the top of the form.
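For anyone who wants to try it, turning SmartNavigation on is just a page-level attribute. A minimal sketch (the control and handler names here are invented):

```aspx
<%@ Page Language="C#" SmartNavigation="true" %>
<%-- AutoPostBack fires the server-side TextChanged event each time an amount changes --%>
<asp:TextBox id="txtAmount1" runat="server" AutoPostBack="true" OnTextChanged="Amount_TextChanged" />
```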

SmartNavigation does take care of this issue, although if a user is in a text box that causes post back, and they tab off the field, the post back happens and SmartNavigation puts the focus back into the textbox they just left. This is better than moving them back to the top of the form. However, once I added a few more user controls and a style to my page, I ran into problems. I did not get a page error. Instead, IE reported a fatal error and my web browser closed. This happened to me repeatedly.

I followed my usual course of action when I run into crazy, unexpected behavior. I went to Google. I found a blog post by Karsten Samaschke dedicated to SmartNavigation, and why not to use it. In my situation, my style sheet may be the culprit. For now, I think I will turn SmartNavigation off.

And I was so excited about using something I learned while studying for the cert tests…

Wednesday, May 25, 2005

ASP.Net Custom Validators

The other day I came across a tidbit of information that, once stumbled upon, I realized I had known long ago. I wish I had remembered it before I had to debug code for an hour, so I thought, in the interest of posterity, I would document this tidbit right now.

In our web solution, we make use of user controls rather extensively. My client had the need for one of my common controls to be validated, but only in one instance of its implementation. I thought, "Hey! This is the perfect time to use a custom validator." I dropped my custom validator onto the page, set the ControlToValidate to my user control, added the OnServerValidate event and voila. Except for one problem...I got an error when I tried to open the page. Apparently, a user control is not allowed to be the target of a custom validator. So what to do? As has happened in the past, and as I am sure will happen again in the future, I thought of a workaround :)

I decided to set the target control to a text box I had on the same form. Oops. This text box did not require an entry. My custom validator only fired when there was text in the textbox. It took me a couple of seconds to recognize my mistake, as I tried to figure out why my validator worked sporadically.

Long story short (too late???), if you want to use a custom validator, the validation event will only fire if the control you are validating has an entry.
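To make the gotcha concrete, here is a minimal sketch (the control and handler names are invented): the ServerValidate handler below simply never runs when the target text box is empty.

```csharp
// Fires only when the validated text box contains text; an empty box
// skips validation entirely and the page happily posts back as "valid".
private void valAmount_ServerValidate(object source, ServerValidateEventArgs args)
{
    try
    {
        args.IsValid = Decimal.Parse(args.Value) > 0;
    }
    catch (FormatException)
    {
        args.IsValid = false;
    }
}
```

If blank input also needs to fail, pair the CustomValidator with a RequiredFieldValidator, or check the value manually in the submit handler.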

Tuesday, April 05, 2005

SQL Server 2005 Summit

I went to the SQL Server 2005 Summit Event here in Minneapolis last Thursday. I thought it might be worthwhile to mention my overall impressions as well as what I learned.

As with most Microsoft events, there was a healthy dose of marketing in the keynote speech. A lot of time was spent promoting the business intelligence that comes packaged with a SQL Server 2005 license. We were wowed with a 5-minute demo of how to take an Excel spreadsheet and turn it into a report that intelligently uses the data in the spreadsheet (although the algorithm used to convert the data into usable information - which is the hard part - was all done and ready before the demo). Microsoft seems to be promoting how SQL Server 2005, which includes SQL Server Reporting Services, will make life a lot easier and make IT budgets stretch much farther than ever before.

Onto the developer track...first off, I have to shamefully admit that I have not been keeping up on the Yukon beta releases. I was given an evaluation beta of SQL Server 2005 at the event, and look forward to installing it on my VPC and playing around. That being said, I was excited about some of the things I heard and can't wait to get my hands on the new tool.

The top 5 things that were of interest to me include:
  1. There is a new native SQL Server datatype called XML. This datatype allows for all XPath manipulations.
  2. You can now more elegantly embed error handling into stored procedures and user-defined functions. T-SQL now supports try/catch. Personally, I was never all that crazy about having to check @@Error.
  3. Enterprise Manager's user interface has been redone to mimic Visual Studio. This interface also combines Enterprise Manager and Query Analyzer into one. Yippee! No need to run them both at the same time (which I ALWAYS do).
  4. The Common Language Runtime has been incorporated into SQL Server 2005. You can now use C# to program stored procedures, user-defined functions, aggregates, user-defined datatypes (this won't support T-SQL anymore), and other computation-intensive operations. I think this is a cool addition, but I can already hear the arguments from some of my colleagues that business logic does not belong in the database.
  5. You can publish a web service directly out of SQL Server 2005. I am not sure how this works, but I am guessing it is similar to dropping a connection object onto a web form. You can do it, but should you?
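To give item 2 some shape, the new error handling looks roughly like this (a sketch from memory; the table and variable names are made up):

```sql
BEGIN TRY
    UPDATE Orders SET Total = Total + @Amount WHERE OrderId = @OrderId;
END TRY
BEGIN CATCH
    -- ERROR_NUMBER() and ERROR_MESSAGE() replace checking @@Error
    -- after every single statement.
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH
```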

I will keep you posted as I learn more.

For another resource on what's new in SQL Server 2005, check out ASP Free's article on the new features.

Wednesday, March 30, 2005

Microsoft Certification Tests - Part 2

Okay, thank goodness for the free second chance Microsoft is offering on all certification tests. I have been putting off my tests. There seem to be so many other things I want to do on and for my projects other than study for cert tests. Anyway, due to Jake's breakneck speed through the tests, I thought I had better get a move on.

Today I took, failed, re-took, and passed the 70-300 test. I was so bothered when I did not pass the first time that I had to immediately correct the mistake. Ordinarily, I would have waited to take this test until I had no doubt I could pass. To be honest, I had little doubt today. Anyway, my employer will pay for me to take each test once. I got the second one free, so I am free and clear. Thanks, Microsoft! :)

Wednesday, March 23, 2005

Visual Studio Web App Debugging

My current solution holds 32 projects. Many of the projects in my solution hold project references to other projects within the solution. The project references make XCopy deployment of our web app quick and easy. No problems here, right? Wrong! After many weeks of attempting to debug my web app, the debugger seemed to work in a very hit-and-miss fashion.

Here is the scenario...I have VS open. I make changes to a domain object class. I go to the Debug menu and select Start. My breakpoints are hit, and life is wonderful. Here is the catch. I stop debugging and make some more changes. I go to the Debug menu and select Start. This time, when my page opens in the browser, I get an error that the WebControls.dll can't be found. WTF?!? I stop debugging. I rebuild the solution outside of the debugger. Everything appears to be fine. I start the debugger again, only to see the same error. Okay, no need to panic. I will just close down VS and reopen it (which becomes quite a lengthy chore if you are working over VPN and your solution gets the latest version on load). That being done, I rebuild. No errors, everything looks fine. I start the debugger. By this point, about half of the time, the error is gone and things are fine. But the other half of the time, the error is STILL THERE!!! So what do I do? Occasionally I follow the theory that if I try the same thing over and over without changing anything, it is bound to work sooner or later, so I close down VS, reopen it, and give it another try. Eventually, I throw my hands up in the air and restart my machine. That always fixes the problem.

The thing is, it is difficult to meet estimated deadlines when I have to restart my machine at sporadic times throughout the day.

I am in no way suggesting that in my elevated state of frustration, it is impossible that I overlooked a simple and quick fix to my issue.

After being forcefully exposed to my rants, my co-worker, the JavaKid, finally asked me what was going on and how he could help. I told him that there was nothing he could do because the problem was that VS was the suckiest IDE in existence, end of story. Although the JavaKid loves to hear about the ways Microsoft can improve, he did ask me why I would choose to run the debugger in the way I was using it. He told me that there is a better way to debug a web app. So, to make a long story short (too late, huh?): instead of selecting Debug/Start, right-click on the page you want to launch and select View in Browser (at which point I am fuming because I do NOT want to view it in browser, I want to DEBUG). Once the page is loaded in the browser, select Debug/Processes. In the list of processes, select aspnet_wp.exe, click Attach, and click OK. Now we are debugging! Since I have followed his advice, I have not seen that nasty error.

So the moral is: When building a web app, although the play button looks shiny and tempting, DO NOT PRESS IT! Attach to your running process instead. This tip has saved me many hours.

Also, if you tend to get an error saying a resource cannot be copied because it is in use by another process, try disabling your Indexing service and restarting IIS.

Monday, March 21, 2005

JavaScript - Bigger and Better?

I read an article today about the possible future of personal computer use, called Goodbye, computer; hello, world! In a nutshell, this article talked about an idea that may or may not be underway right now at Google. The idea is that Google would create their own operating system and allow users to subscribe to their mega-computer that would hold all of their personal applications. Theoretically, this would make the need for storing data on a PC obsolete. You could now travel to Geneva without lugging around your laptop and just hop on a computer over there, log in, and voila! All of your applications are at your fingertips.

Okay, that would probably work for some non-proprietary data needs. I might feel comfortable publishing out my Quicken data to Google, as long as my privacy was guaranteed. I am not so sure this would ever work for business-level data, however.

Then came the part of the article that made me squirm. In order for this throw-away-your-PC model to work, web applications would have to become quicker. I will just quote directly the line about the technologies that Google is looking to use to assist their team in creating web apps that are as fast as desktop applications. This new marriage of technologies is called Ajax:

"Ajax, which is short for Asynchronous JavaScript + XML, combines JavaScript, dynamic HTML, and XMLHTTP to, in essence, let you build Web-based applications that run as quickly and seamlessly as local software."

Maybe I am alone in this, but the idea of working with JavaScript as my programming language of choice is not a pleasant prospect. Actually, relying on any scripting language is something of which I would prefer to steer clear.

But it would be very cool to be able to write my applications for the web without having to be as considerate of loading and post back speed. And having an engine manage asynchronous requests from my web app to speed responsiveness would be very cool. So is it time for me to dig back out my JavaScript books that I have hoped to forget existed?

Friday, March 11, 2005

Visual Basic 6.0 Petition

If you have been reading my blog since the start, you will know that I come from a Visual Basic background. My first experiences delivering solutions to my business clients came in the form of VB 6.0 windows applications. I even occasionally opened the dreadful IDE that was Visual InterDev. Wow, did that ever suck. I thought it was pretty cool at the time, considering my other alternative of FrontPage and notepad if I wanted to develop on the Microsoft platform.

I also was very supportive of .net. I couldn't wait to get my hands on the first beta of the .net framework.

I did have to deal with some pains when it came to the applications I built using VBScript or VB 6.0 once the .net Framework was around. There really is no way that I have found to easily port code from 6.0 to .net. I had to decide for each of my client applications if I thought it would be better to continue to support the application in 6.0 or if it would be better to re-write the app in .net. After working with .net for a couple of weeks, I never wanted to go back to 6.0. Whenever I had the chance, I upgraded my clients. I even estimated an upgrade at about a tenth of the actual cost and resolved myself to doing the rest of the work for free in my personal time just so that I would not have to use Visual Studio 6.0 any more.

That is why I have trouble understanding the purpose of this "Save VB 6.0" petition that has been circulating. I understand how there may be very large legacy systems built in older versions of Visual Basic, but come on. Microsoft has continued to support Visual Basic 6.0 for years since the .net release. For legacy code, wrap it, convert it, or leave it. Again, it is completely possible that I am missing the whole point, so if I am, please enlighten me. But I do not understand why Microsoft should continue to support an outdated language. I am sure there are applications written in Lotus 1-2-3 that are still running out there, too. Is there still support for Lotus 1-2-3? Gosh, I hope not. What a dreary life that would be if you were the support person.

So VB 6.0 is a thing of the past. As a Visual Basic language supporter, I ask, what is so wrong with that?

Wednesday, February 23, 2005

Ode to Trace

I have a project I have been working on wrapping up. We have gotten tons of user feedback, which thrills me because it means my tool will get used. On my list of items needing attention, however, one item loomed large. I ignored the item for as long as I could. I cleared up everything else, and then I had no choice but to stare my arch-nemesis directly in the face: performance. "Yikes", I murmured to myself, trembling. There were so many controls on my page, and so many domain objects getting loaded, that the task of finding the offending performance hog seemed like a huge undertaking. I fretted, I grieved, and then I pulled myself out of my self-pity mode and came up with a plan.

From the good old days of .net Beta, when stepping through code without running into environment hang-ups was only a distant fantasy, I remembered my friend, Trace. So I set Trace=True in my aspx file and went to town adding Trace.Write lines to my code. I began to get excited as I awaited my first trace output. I could envision the lines I would see, telling me the time at which each snippet of my code began and ended execution. Lo and behold, when I saw the first Trace output, my troubles were over. I saw that the offending control was something left over from pre-version-one of my form. This slow control was loading up over 50 times. I slashed the control from my code, and my page now loads about 60 times faster. Hooray! I have defeated the performance tyrant. So, in appreciation of Trace's beautiful output, I have decided to write an Ode...
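For anyone rusty on the mechanics, the pattern looks roughly like this (a sketch; the method name is invented):

```csharp
// With Trace="true" in the @Page directive, these markers show up in the
// trace output with timings, bracketing the suspect code.
Trace.Write("Perf", "LoadAmountControls: begin");
LoadAmountControls();   // hypothetical suspect method
Trace.Write("Perf", "LoadAmountControls: end");
```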

Ode to Trace

Before I used you, the performance of my page was a disgrace
Rather than troubleshoot, I would have preferred an eye full of mace
As my hour of deployment drew near, I was in a time-race
When, in a flash of brilliance (okay, that may be arguable), I turned on Trace.

Seriously, maybe everyone else in the development community is using Trace all of the time. I had forgotten about it, and I am SO GLAD I remembered to use it today :)

Tuesday, February 15, 2005

To GAC or not to GAC, that is the question...

In a project with which I am involved, we have a .dll that holds common web controls. This .dll is shared by multiple web applications; however, they all reside on the same server. Originally, when this .dll was shared by two web applications, we had both web projects in one solution and used project references. Then, when we deploy, we can just XCOPY the web applications’ files and bin folder. Voila.

Now we have some SharePoint sites that we want to share the common web controls. We can follow our previous architecture, and add the team sites to our solution and use project references. We can also create a new solution for the team sites, but add the existing controls project from SourceSafe to our new solution. In the latter scenario, we would have one master copy of the controls project in source safe.

However, I am not thrilled with the prospect of using XCOPY to create yet another batch of instances of the controls .dll on the web server. I do see one advantage: when one of our web applications goes live, we won’t have to worry about future changes to the controls .dll affecting it. Still, having multiple copies of the same .dll on the same web server seems like a worse idea when I consider what we have to go through to address a bug fix or try to upgrade something, especially if we end up with different versions of the controls .dll in each web application’s bin folder.

I was noodling this issue and thought, “Hey, why not use the GAC? The end of .dll hell!”. All of the web applications will be on the same web server, and the GAC was created for just that purpose, right? I can still opt to have multiple versions of the controls .dll in the GAC if I so choose, or I can keep one version and have only one place to maintain it.

I have used the GAC in the past when dealing with a WinForms application, and after the initial setup, things went swell. Our team did create a utility to rebuild our GAC .dlls out of SourceSafe on demand, and we also agreed on a local structure where we published our debug .dlls so we could share each other’s projects without any headaches. We abandoned project references altogether. There was some overhead, but I never had to restart Visual Studio because "a reference is being used by another process."

In my current situation, I am a little hesitant to use the GAC because I know that if one of the web applications gets moved to another web server, things will break. I have also read some articles where people have had trouble with strong naming and signing their assemblies. I know that changing course now will also require overhead to set it up so my teammates and I are able to work on the same projects without stepping on each other’s toes, and without requiring a manual copy of new versions of .dlls from SourceSafe to our local GACs. Putting our assemblies in the GAC will also make it difficult for another member of my company to quickly get my solution from SourceSafe and be able to compile and run my code (like our graphic designer). They would have to run an install or manually register the required .dlls to the GAC before their code would compile.
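For completeness, the manual setup each teammate would need looks something like this (the assembly names and paths are invented):

```
rem Generate a strong-name key pair and install the signed build into the GAC
sn -k CommonControls.snk
gacutil /i bin\Debug\CommonControls.dll
rem List installed versions to confirm
gacutil /l CommonControls
```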

I am also still not crazy about the many versions of one .dll on the same web server, just asking to be out of synch with each other. And how will we manage which version is being used on each web application when we will keep one copy in SourceSafe?

I am undecided what to do. Do you have any experience, positive or negative, with using the GAC for web deployments? What would you do in this situation?

Monday, January 31, 2005

DateDiff in C#

This post will probably only matter to those of us who have VB roots, which I get no end of heckling from my co-workers about. But as I was working today, one of the things I had to do was calculate the span between two dates in months. Piece of cake, I thought to myself. All I have to do is use the DateDiff and specify months...except where is DateDiff in C#??? It is AWOL!

So I tried to see what else I could come up with. I did uncover this thing called System.TimeSpan. Apparently, TimeSpan will give you the difference between two DateTimes in something called 'ticks.' You can then take the ticks and convert them to days, hours, minutes, seconds, and milliseconds. The drawback is that if you want the difference in years or months, you have to do some hokey manual calculation that may or may not jibe with what the span really should come out to. In my research, this was the best example of how to duplicate DateDiff in C#.
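For the record, the manual calculation I mean looks roughly like this (a sketch; whether the calendar-month convention matches VB's DateDiff in every edge case is exactly the part that may not jibe):

```csharp
using System;

class DateDiffDemo
{
    // VB's DateDiff("m", d1, d2) counts calendar-month boundaries crossed,
    // so this ignores the day-of-month entirely.
    static int MonthDiff(DateTime start, DateTime end)
    {
        return (end.Year - start.Year) * 12 + (end.Month - start.Month);
    }

    static void Main()
    {
        DateTime start = new DateTime(2004, 11, 15);
        DateTime end = new DateTime(2005, 1, 31);

        // TimeSpan only goes down to days, hours, minutes, seconds, ticks...
        TimeSpan span = end - start;
        Console.WriteLine(span.Days);              // 77

        // ...so months take the manual calculation.
        Console.WriteLine(MonthDiff(start, end));  // 2
    }
}
```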

Thursday, January 27, 2005

Estimates...Need I Say More?

It’s been one of those projects. I was the one who estimated it, so how can I complain when it takes me the majority of my estimate just to figure out what the specification documents are really trying to tell me to do? How can I push back that it should be a change request when the "change" is just being able to understand bits and pieces of detail that don’t fit together in any logical fashion? The answer is I cannot. So that is why I have not been posting lately and instead have been working as hard as I can to meet my deadlines, even when it means extra hours that I do not bill to the project.

I have a friend, Avonelle, who is an independent consultant. She has a very different way of dealing with estimates. Instead of giving an estimate to her client and tracking actual hours against it so she can bill hourly, she assigns a “value” to each project and bills that value. Although she creates an estimate to help her decide what the value should be, the actual number of hours she works is immaterial. She is encouraged to be efficient with her time, since her effective hourly rate decreases with every additional hour she puts into the project. And it is still in her best interest to deliver quality solutions if she wants repeat business, so her clients should not suffer from any corner-cutting her efficiency might tempt her toward.

I wonder if a large consulting company could ever take this approach to billing clients…

Thursday, January 06, 2005

Microsoft Certification Tests

I am working on my MCSD.Net for C#. I have my MCSD for Visual Studio 6.0 and VB. I have been casually working on my certification for over 6 months. I got my first MCSD certification when I was fairly new to the field, and being certified seemed to be a very important marketing tool for a rookie. This time around, I am just having a hard time getting excited about studying and taking the tests.

I have taken and passed two tests so far. The first test I took was the 70-316 exam - Developing and Implementing Windows-based Applications with Microsoft Visual C# .NET and Microsoft Visual Studio .NET. The second test I took was the 70-320 exam - Developing XML Web Services and Server Components with Microsoft Visual C# and the Microsoft .NET Framework. Why I didn't take exam 70-315 immediately after exam 70-316, I will never know. The two exams overlap quite a bit, and now I am going to have to re-memorize all of the facts on the test.
Anyway, I passed the web service test and I have never written or consumed a production web service. I would like to. I played with web services in my studying, and felt that I did learn a lot while studying for that test. But do I know enough to be certified in web services? Maybe, I guess.

I had to study just as hard for the Windows test. I have been involved in many WinForms .Net projects in my career. I have been involved in an enterprise-level implementation of a WinForms application. In this implementation, we integrated with Office 2003. We also built a build helper to go out and crawl Visual Source Safe for us to make it easier for our three-man project team to coordinate efforts without stepping on each other's toes or having to wait for one another. It was a cool project.

So here is what I struggle with. I should have been able to pass a certification test on .NET WinForms quickly and easily, with minimal studying. And I should have had to pore over the web services material in MSDN and put in a painstaking number of hours to get up to snuff on web services before I could become certified.

But the truth of the matter is, I had to study an equal amount of time for each test. I also found each test to be very silly in the minute details it covered, rather than testing everyday uses of Visual Studio and the far-reaching implications that your architectural decisions can have. For example, there was not one question on best practices for organizing your classes in a project. There was not one question on best practices for coding standards, commenting your code, or having project-wide standards for user interface design. Not once was I asked what I should do as a developer to prevent cast errors when persisting data to a database field of type smalldatetime. I was not asked when validation should be used, and why. I was not asked about best practices for creating a usable menu structure in my application.

I was asked questions about how to set up a connection string. How many of you out there know the syntax for a connection string by heart? All I have to do in the field to figure that one out is open up Google. But I can't use Google to show me how to organize my code in a manner that will make it easiest to refactor and maintain. So why doesn't Microsoft test us on those things?

I guess my concern is that certifications mean very little if an inexperienced person who picks up a book and memorizes a few facts has the same chance of passing the tests as someone who has been studying software engineering and implementing enterprise solutions with Microsoft tools for years.

Is it meaningful to be certified? Is it merely a marketing tool that we can use to make us look like more bona fide professionals when seeking out clients? And how could Microsoft build more meaningful questions into their exams so that skills other than syntax memorization are tested?

Wednesday, January 05, 2005

One to One Relationships

After many weeks, I am back with a question for you all. What do you feel is the best way to handle a one-to-zero-or-one relationship in a database? Let me explain what I mean.

First, I feel that there are two scenarios where a one-to-zero-or-one relationship applies. The first represents two distinct objects, such as an employee and a computer, where a business rule states that one employee cannot have more than one computer, and one computer can only belong to one employee. I would probably argue that in time this will turn out to be a false business rule: when one employee leaves, the computer is assigned to another employee, and when the computer dies, the employee gets another one. But for now, let's just pretend that the one-to-zero-or-one applies here. I think it makes sense for there to be an Employee table and a Computer table. Each table would have its own primary key, and let's just assume that the key is an auto-incremented identity. How should the tables be related? Should we add an EmployeeID field to the Computer table? Or should we add a ComputerID field to the Employee table? We might choose to put the ComputerID in the Employee table because we feel that "employee has a computer" makes more sense than "computer has an employee." Either way, it really shouldn't matter.
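To make that first scenario concrete, here is a hypothetical T-SQL sketch (table and column names invented). One detail worth calling out: it takes a UNIQUE constraint on the foreign key to actually enforce the "zero or one" side, because a plain foreign key would happily let many employees share one computer.

```sql
-- Hypothetical schema: ComputerID lives in Employee, and the UNIQUE
-- constraint is what enforces "one computer belongs to at most one employee."
CREATE TABLE Computer (
    ComputerID   int IDENTITY(1,1) PRIMARY KEY,
    SerialNumber varchar(50) NOT NULL
);

CREATE TABLE Employee (
    EmployeeID int IDENTITY(1,1) PRIMARY KEY,
    Name       varchar(100) NOT NULL,
    ComputerID int NULL UNIQUE
        REFERENCES Computer (ComputerID)
);
```

One SQL Server caveat: a UNIQUE constraint treats NULLs as duplicates of each other, so this particular sketch would only allow a single employee without a computer. That wrinkle is one reason some people flip the relationship and put a UNIQUE EmployeeID on the Computer table instead.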

But how about the second situation, where a one-to-zero-or-one relationship represents a supertype/subtype relationship? For example, we have a supertype of Individual with subtypes of Employee, Contact, and Owner. The Individual table has fields for the first name, middle name, last name, address, phone number, and SSN, and there is an auto-incremented identity column which holds the IndividualID. Let's assume that the Employee table has fields for the employee number, title, related department, hire date, start date, and end date. The Contact and Owner tables also have fields unique to them. So how do we relate the Employee table to the Individual table? How do we represent the "Employee is an Individual" relationship in the database?

I have seen this handled in two ways. First, we could have the primary key of the Employee table be the IndividualID, which is also a foreign key to the Individual table. The IndividualID would then exist as the primary key of four tables: the Individual, Employee, Contact, and Owner tables. In the Employee, Contact, and Owner tables, the IndividualID would also be a foreign key to the Individual table.

Or, we could have the primary key of the Employee table be an auto-incremented identity field called EmployeeID, and store this ID as a foreign key in the Individual table. The Individual table would then end up with three foreign keys: EmployeeID, ContactID, and OwnerID. Each of these fields would allow null, and only one of them should ever contain a non-null value.
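In table form, the first approach (shared primary key) looks something like this hypothetical T-SQL sketch, with column lists abbreviated:

```sql
-- Hypothetical sketch of the first approach: the subtype's primary key
-- IS the foreign key back to the supertype.
CREATE TABLE Individual (
    IndividualID int IDENTITY(1,1) PRIMARY KEY,
    FirstName    varchar(50) NOT NULL,
    LastName     varchar(50) NOT NULL
    -- plus middle name, address, phone number, SSN
);

CREATE TABLE Employee (
    IndividualID int PRIMARY KEY
        REFERENCES Individual (IndividualID),
    EmployeeNumber varchar(20) NOT NULL,
    HireDate       smalldatetime NULL
    -- plus title, department, start/end dates
);
```

Getting a full employee record is then a simple join on IndividualID (FROM Individual i JOIN Employee e ON i.IndividualID = e.IndividualID), and the Contact and Owner tables would follow the same pattern.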

Which solution is the better one?