Showing posts with label Software Testing.

Monday, October 18, 2004

Test This Site

A motif in my lectures is the lack of appreciation for testing and testers. In addition, especially in the era of outsourcing, there is simply not enough research done on testing. Since my lecture this week in the Software Engineering class is on testing, I was rummaging through my testing resources. A gem is this site, The Testing Standards Working Party, hosted by the British Computer Society's Special Interest Group in Software Testing. The site offers everything from a glossary to How To's to Papers on testing. Highly recommended!



Do you have any suggestions for testing resources? If so, add a comment mentioning them. Thanks and later!



Tuesday, October 5, 2004

We have met the enemy and they are us!

An article at msnbc.com states that software disasters are often people problems. The direct link to the MSNBC article can be found here. I was referred to the article by a posting on Slashdot. Quoting the article:

Such disasters are often blamed on bad software, but the cause is rarely bad programming. As systems grow more complicated, failures instead have far less technical explanations: bad management, communication or training.



Although the article confuses what software is and is not (the SAP system is software; the application software added to it is not), the points it makes are still relevant. It refers to issues with requirements, stakeholder participation, operations, testing, end-user training and inhomogeneity of technical understanding. Inhomogeneity of technical understanding is a Multics term for a dire project scenario in which managers do not understand software technology and developers do not understand the big picture, the context in which their software will run. Unfortunately this occurs all too frequently.



The article cites a National Institute of Standards and Technology study estimating that software bugs cost $59.5 billion each year and that a third of that cost can be attributed to inadequate testing. The PDF of the 300+ page report, "The Economic Impacts of Inadequate Infrastructure for Software Testing," can be downloaded here.



The title of this post is a quote from the comic strip Pogo, penned by Walt Kelly, which was an adaptation of a Commodore Oliver Hazard Perry quote after a naval battle. You can find more information about the context of his quote here.



As always, comments are appreciated. Later!



Thursday, July 29, 2004

Invasion of the Genetically Altered Mutant Test Cases!

In my lectures on Software Testing I remark that there is not nearly enough research done on software testing. This summer, in my web-based software engineering course, we were discussing mutation testing, and one of my students mentioned the potential application of genetic algorithms to it. After a bit of web searching I came across two articles that explore genetic algorithms for testing (pdf of article 1, pdf of article 2). Please share any comments you have on these articles or additional articles on this topic. Please also share any additional resources or pointers to research on software testing. Thanks.
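To make the idea concrete, here is a minimal sketch of how a genetic algorithm might search for a test input that "kills" a mutant, i.e., an input on which the mutant's output differs from the original's. The toy program, the mutant, and all the parameters are invented for illustration; real mutation-testing tools generate mutants automatically.

```python
import random

def original(x):
    return x * 2          # toy program under test

def mutant(x):
    return x + 2          # mutated version (operator * changed to +)

def fitness(x):
    # An input "kills" the mutant if the two outputs differ;
    # the size of the difference guides the search.
    return abs(original(x) - mutant(x))

def evolve(pop_size=20, generations=50):
    # Start from a random population of candidate test inputs.
    pop = [random.randint(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > 0:
            return pop[0]         # found a mutant-killing input
        # Otherwise keep the fitter half and perturb it (GA "mutation").
        parents = pop[: pop_size // 2]
        pop = parents + [p + random.randint(-3, 3) for p in parents]
    return None

killer = evolve()
```

Here the GA converges almost immediately because any input other than 2 kills this mutant; the interesting research question is how well the same search scales to subtle mutants that only rare inputs can kill.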



In doing the research for this post, I came across yet another source for free educational material on the web. Samizdat Press has an eclectic collection of articles, including a tutorial on LaTeX and pointers to other free educational material sites. In the near future I hope to list most of these resources on my homepage.



My summer posting has been light. Hopefully you will see increasing frequency in the next few weeks. Later!



Sunday, March 28, 2004

More Testing in College!

Molly Campbell's entry on testing makes a great suggestion for how we can improve our Computer Science classes that emphasize coding. She asserts that testing is not emphasized in these classes and makes this observation: "Testing was something the professor did to grade your program, not something you had to focus on."



Molly is right on the mark with her observations. Software engineering practices should be used not only in industry but also in academics. Good software engineering practices should begin in school if we have any hope of seeing them used in industry. Do you agree? What other software engineering practices were ignored in your computer science education? Later!



Molly Campbell's entry:



As we discussed the importance of testing during lecture, I was reminded of the lack of project testing that occurred during my undergraduate studies. The absence of testing is ironic after reading how Brooks says that testing is the most important part of a project.



In school I experienced the opposite, spending almost all my time on coding. Some professors did stress the importance of planning, but I don’t ever remember anyone emphasizing testing. Granted, our school projects were of a smaller magnitude, where it was easier to get away with a lack of testing, but you’d think that if half of your time should be spent on testing, as Brooks suggests, it would bear more weight in an academic setting.



School projects usually ended up being finished during crunch time so you were just so excited to get it done that you hardly tested it at all. Testing was something the professor did to grade your program, not something you had to focus on.



The importance of testing early and often could easily be stressed more, even on the simpler projects done in school. Perhaps if it were emphasized there, it would transition to the workplace more easily and we’d have more successful projects. It’s an interesting concept.



Now that I’m in the workplace, I realize that more people understand the importance of testing, but there are still similarities to my school experiences, especially when a project gets behind schedule. There is definitely still a crunch time and it seems like testing is what gets sacrificed. Even on my current project, it looks like regression testing is going to be cut in order to meet the schedule.



I think by testing early and often we will help make sure that more of our effort is spent on testing and hopefully we will benefit from getting early feedback. We should also remember the importance of testing when planning the schedule so we allot sufficient time for testing. Well tested programs will definitely provide the best results.



Sunday, March 21, 2004

Use Cases in Testing

This blog entry is from Pawel Wrobel, a current student in my CS540 class, Introduction to Quantitative Software Engineering at Stevens. It is a great illustration of how versatile use cases are. I hope you enjoy it. Later!



Use case modeling is not only a good way to collect and analyze the functional requirements of a system; it also greatly facilitates testing. On my previous project our group was responsible for the elicitation and validation of requirements for a new system. We decided to employ use case modeling. Preliminary requirements were obtained from our subject matter experts and a use case model was created. The use case model was then presented to the prospective users. We found that the users really liked working with the use cases, as they were easy to understand and they could play out different scenarios of their interaction with the system. After numerous iterations we arrived at our final use case model, which we included in our requirements specification and submitted to our contractors for further development. At the same time, based on the use case model, we began developing our test cases for formal acceptance testing.
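That last step is often nearly mechanical: a use case scenario can be transcribed almost line for line into an acceptance test. Here is a small sketch in Python; the "withdraw cash" use case, the Account class, and the amounts are all invented for illustration, not taken from Pawel's project.

```python
class Account:
    """Toy system under test for the illustrative use case."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdraw_main_scenario():
    # Use case "Withdraw cash", main success scenario:
    # 1. Customer has an account with $100.
    acct = Account(balance=100)
    # 2. Customer requests a withdrawal of $40.
    acct.withdraw(40)
    # 3. System dispenses cash; remaining balance is $60.
    assert acct.balance == 60

def test_withdraw_extension_insufficient_funds():
    # Extension 2a: requested amount exceeds the balance;
    # the system rejects the request and the balance is unchanged.
    acct = Account(balance=100)
    try:
        acct.withdraw(500)
        assert False, "expected the withdrawal to be rejected"
    except ValueError:
        assert acct.balance == 100

test_withdraw_main_scenario()
test_withdraw_extension_insufficient_funds()
```

Note how the main scenario and each extension map to separate test cases, so coverage of the use case model translates directly into coverage of the acceptance test suite.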



Thursday, November 20, 2003

On Heisenbugs

A Heisenbug is a software error whose behavior changes when you attempt to observe or reproduce it. It is a bug that occurs in fielded software but is difficult to reproduce when you try to simulate it in the lab. Mitul Patel, a student in my online course, had this experience recently. They thought they had remedied the problem, but they had a difficult time convincing their management and customers that it was fixed, since it was so difficult to reproduce.



Heisenbugs are not that uncommon. Do you have any suggestions on how to deal with them, or any personal experiences with Heisenbugs? It would be especially helpful if you have some strategy to test one and convince others that the bug has been swatted. Thanks for your input. Later!
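A classic source of Heisenbugs is a race condition: the failure depends on thread timing, so adding a logging statement or attaching a debugger perturbs the timing and the bug vanishes. Here is a minimal sketch in Python; the thread and iteration counts are arbitrary, and on any given run the unsynchronized version may or may not lose updates — that unpredictability is exactly what makes such bugs hard to demonstrate to management.

```python
import threading

N_THREADS, N_ITERS = 4, 100_000

def run(use_lock):
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(N_ITERS):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1  # unsynchronized read-modify-write: the race

    threads = [threading.Thread(target=work) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

locked = run(True)    # always N_THREADS * N_ITERS
racy = run(False)     # sometimes less: updates can be lost
```

One practical way to convince others that such a bug is fixed is to make the race deterministic — inject delays or use a scheduling/stress harness that reliably provokes the failure in the old code and never in the new.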



(Professor Bernstein's paper, “Software fault tolerance forestalls crashes: To err is human; to forgive is fault tolerant,” discusses Heisenbugs and can be found at my web site http://homepage.mac.com/vesonder in the CS540 notes.)



Monday, October 27, 2003

Stressed Out

In Professor Bernstein's paper on fault tolerance he discusses a classical engineering technique: stress a system until it breaks, then certify it for considerably less than the breaking point. I was wondering to what extent this technique is used in software testing and certification. How many of you have done this on any system that you have produced, either academically or in industry? I estimate that fewer than half of the systems I have been involved with have (1) been stressed and (2) been certified for much less than the stress point. This is despite the fact that, in most instances where we did stress test a system, doing so avoided a major fiasco.
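As a sketch of how this technique might look in code: ramp the load until the system fails, record the breaking point, and certify for a fraction of it. The load check below is simulated (a system that breaks above 800 requests/sec), and the 50% safety factor is an illustrative choice; in practice the check would drive a real load generator and measure response times against a target.

```python
def service_handles(load):
    # Stand-in for a real measurement: True while the system meets its
    # response-time target at the given load (requests/sec). Here we
    # simulate a system that breaks above 800 req/s.
    return load <= 800

def find_breaking_point(step=100, limit=10_000):
    # Ramp the load upward until the first failure.
    load = step
    while load <= limit and service_handles(load):
        load += step
    return load - step        # last load level that still worked

SAFETY_FACTOR = 0.5           # certify for half the demonstrated capacity

breaking_point = find_breaking_point()   # 800 req/s in this simulation
certified_load = int(breaking_point * SAFETY_FACTOR)   # 400 req/s
```

The margin between the certified load and the breaking point is what absorbs the surprises — traffic spikes, slow hardware, unanticipated usage patterns — that the test lab never sees.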



So the open question is: how many of you stress test each system, and, if so, do you certify the system for, or make sure it operates within, parameters much lower than the stress point?



Sunday, October 19, 2003

New Testing Book

I am currently reading a new testing book, How to Break Software: A Practical Guide to Testing, by James Whittaker, ISBN 0-201-79619-8, Addison-Wesley, 2003. This is the book I mentioned in the CS540 lecture class. It approaches testing as a series of attacks, similar to how secure systems are challenged. Whittaker stresses ingenuity and flexibility in testing. It is a good addition to any testing program. Recommended!