Back when I started developing, my company followed the good old-fashioned waterfall method. We provided our customers with a rigorous requirements-gathering phase, followed by a design phase. Each phase resulted in a document detailing the findings, often in a numbered format so we could easily refer back to each requirement. There was sign-off at the end of each phase, often followed by a plethora of change orders as the next phase began.
When the design was solidified and approved by the client, the implementation phase came next, followed by deployment. We were paid and the project was wrapped up (theoretically) when the client gave us final sign-off by walking through the user-acceptance tests to validate that each requirement was met and the user interface was bug-free.
Nowadays I find myself working with agile approaches to software development, but I still find value in client testing. I am not talking about handing the product over to the client and asking them to start using it. I am talking about creating test scripts that help the users cover each logical path in the application, with validation checks built into the scripts. Since my company does not have a test team, I recently took a stab at writing a test script myself. How do the rest of you do this? Do you rely on the use cases created during design as your test scripts? Do you use an automated testing tool alone to test the user interface, or an automated tool in conjunction with user-acceptance testing? Is user-acceptance testing outdated?
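For what it's worth, here is a rough sketch of what I mean by building validation into the script. It is only an illustration: the OrderBook class is a toy stand-in for the application under test, and the step numbers refer to an imaginary written script, not a real project. In practice each test would drive the actual UI or API rather than an in-memory object.

    # Minimal sketch: each test mirrors one numbered step in a written
    # user-acceptance script. OrderBook is a hypothetical stand-in for
    # the real application.
    import unittest

    class OrderBook:
        """Toy stand-in for the application under test."""
        def __init__(self):
            self.orders = []

        def create_order(self, customer, quantity):
            if quantity <= 0:
                raise ValueError("quantity must be positive")
            self.orders.append((customer, quantity))
            return len(self.orders) - 1  # order id

    class OrderEntryAcceptanceTests(unittest.TestCase):
        def setUp(self):
            self.app = OrderBook()

        def test_step_3_new_order_appears_in_order_list(self):
            # Script step 3: submit a valid order, then confirm it shows up.
            order_id = self.app.create_order("ACME", 5)
            self.assertEqual(self.app.orders[order_id], ("ACME", 5))

        def test_step_4_negative_quantity_is_rejected(self):
            # Script step 4: enter an invalid quantity and expect an error.
            with self.assertRaises(ValueError):
                self.app.create_order("ACME", -1)

    if __name__ == "__main__":
        unittest.main()

The point is simply that each numbered step the users walk through has a matching automated check, so the written script and the validation suite stay in sync.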
Monday, November 01, 2004
3 comments:
I suspect that very few people use automated test scripts, because:
1) The tools are expensive
2) After slapping down the cash to buy the tool, someone has to take the time to learn/understand it.
3) After learning it, someone must take the time to create the scripts for each application.
I'm not saying automated tools are bad - in fact, I think they are great. But I suspect their value is best applied to large development efforts where the applications are maintained for many years.
There may also be sufficient value for medium-sized apps. However, I have never seen automated test tools applied to medium-sized applications (I'm not talking about tools for unit testing, but apps specifically designed for simulating end-user usage). I suspect the reasons are as I outlined above - no one wants to invest the necessary effort.
I still rely on end users to test my applications. Some do a better job than others.
I have worked on a project where we used the Mercury test suite, but we also had a test team in from SWAT. They worked full time on test scripts for many months. I agree that you need a large budget to justify the cost.
We write our own regression suites that we run daily to make sure we haven't borked anything on a given platform. Basically, each engineer is responsible for doing a full fetch, recompiling, and running the suite on the platform of their choice to see if we introduced any new bugs. Works well -- it catches quite a bit of stuff for us.
We don't bother with scripted testing since our product really doesn't work well with those (unless you know of a script that writes random source code that would compile most of the time AND can tell you what the expected results of said source code should be). ;-)
~Aaron [http://ramblings.aaronballman.com]