Tuesday, December 21, 2004

Mind the Gap!

Mind the Gap is an enigmatic phrase in the London Underground, ostensibly referring to the space between the platform and the train and cautioning travelers to be aware of it.  This post is contributed by Lee Moss, who is in the Executive Master's in Technology Management program at the University of Pennsylvania.  It discusses the technique of Earned Value Management which, among other things, tracks the gaps between Budgeted Cost of Work Scheduled, Actual Cost of Work Performed and Budgeted Cost of Work Performed.
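The gaps EVM tracks reduce to a handful of arithmetic identities over those three quantities. A minimal sketch in Python using the standard EVM definitions (the dollar figures are hypothetical):

```python
# Earned value metrics for one reporting period.
# BCWS = budgeted cost of work scheduled (planned value)
# ACWP = actual cost of work performed   (actual cost)
# BCWP = budgeted cost of work performed (earned value)
# The numbers below are hypothetical.

def earned_value_metrics(bcws, acwp, bcwp):
    return {
        "cost_variance": bcwp - acwp,      # negative -> over budget
        "schedule_variance": bcwp - bcws,  # negative -> behind schedule
        "cpi": bcwp / acwp,                # cost performance index
        "spi": bcwp / bcws,                # schedule performance index
    }

m = earned_value_metrics(bcws=100_000, acwp=120_000, bcwp=90_000)
print(m)  # cost_variance=-30000, schedule_variance=-10000, cpi=0.75, spi=0.9
```

A CPI or SPI below 1.0 is the quantitative "gap" the post title alludes to.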





Everybody talks about how hard it is to estimate the cost and schedule of a software system project, but nobody wants to do anything about it.  That's what I got from the reading this week, so I went off on my own to see if some of the Program Management Best Practices (Boeing and otherwise) could be applied.  The first best practice that I looked at was Earned Value Management.

The first place I went on the web was a Google search for "software engineering earned value management." I found a number of useful links.

http://www.niwotridge.com/Resources/DomainLinks/EarnedValue.htm is a wealth of links.

http://www.stsc.hill.af.mil/crosstalk/1998/07/index.html is an excellent paper by Quentin Fleming & Joel Koppelman titled "Earned Value Project Management: A Powerful Tool for Software Projects" that provides very good insight for the EVMS novice, particularly on how EVM can be applied to a software project.  Editor's note: I added the title of the article to the text, and I also recommend the journal Crosstalk, from the US Department of Defense.  Subscription to it is free.  Check out the link.

http://www.stsc.hill.af.mil/crosstalk/2000/12/lipke.html appears to be another interesting article where statistical process controls were implemented in a SW project.  I didn't read this article in any great detail, but I have the link to return to if I need this in my tool kit someday.

http://www.stsc.hill.af.mil/crosstalk/1999/03/lipke.asp talks about applying Management Reserve to SW projects, something every PM should be interested in.  In this article, they say "A nightmare for software project managers is "extras" thrown at them by the customer. Of course, revised requirements are supposed to be renegotiated and reflected by a revised project baseline that includes a new completion date and changed cost. However, many times the "requirements creep" seems so trivial that project managers forego the perfect practice and merely adjust their funding reserve to account for the change. For many situations, the effort required to re-baseline the project and negotiate the change is far greater than the amount of reserve lost. As an internal practice, we
advise customers that changes are being accrued and that we reserve the right to negotiate them once it is apparent the effort to do so is worthwhile; however, until payment occurs for revised requirements, the reduction in funding reserve will be reflected in decreased TFA and thus
a lower cost ratio."  I'm not sure that I am comfortable with this particular approach, because it seems to encourage incremental creep that goes unaccounted for.

http://www.stsc.hill.af.mil/crosstalk/2000/06/lipke.html is more on statistical process control and software engineering.
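The statistical process control idea running through the Lipke articles is simple at its core: plot a process metric against control limits and investigate points that fall outside them. A minimal sketch of 3-sigma limits (the defect counts are hypothetical; real SPC charts such as XmR or u-charts refine this):

```python
# 3-sigma control limits for a process metric (e.g., weekly defect counts).
# The sample data are hypothetical.
from statistics import mean, pstdev

samples = [4, 6, 5, 7, 5, 4, 6, 5]
center = mean(samples)
sigma = pstdev(samples)
ucl = center + 3 * sigma                 # upper control limit
lcl = max(0.0, center - 3 * sigma)       # lower control limit (counts can't go negative)

out_of_control = [x for x in samples if not lcl <= x <= ucl]
print(center, ucl, lcl, out_of_control)
```

Points outside the limits signal the process has changed and is worth investigating; points inside are ordinary variation.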

In general, unlike the weather, it seems that a lot has been done relating SW projects to other program management tools.  I'll look at some of the other best practices another time.





Wednesday, November 24, 2004

No Honor!

In most of my Software Engineering courses this time of the semester is devoted to Human Computer Interaction.  I mentioned in class this site, which discusses poorly designed web pages.  Vincent Flanders hands out a daily award for really bad web sites; he is usually on the mark, and he has been doing this since 1996.



Another source for web site critiques and advice is the book by Jakob Nielsen, Designing Web Usability: The Practice of Simplicity, New Riders Publishing, 2000, ISBN 156205810X.



Don Norman's books are always worthwhile and great reads.  Check out his web site here.  Note the web site URL, www.jnd.org.  Those who have taken my courses should know what the abbreviation means; for those of you who do not, it refers to just noticeable differences.  Later!
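For the curious, the just noticeable difference is usually introduced through Weber's law: the smallest detectable change in a stimulus is roughly proportional to its current intensity. A minimal sketch (the Weber fraction used is illustrative, not a measured value):

```python
# Weber's law: the just noticeable difference (JND) in a stimulus is
# roughly proportional to its intensity: delta_I = k * I.
# The Weber fraction k below is hypothetical, chosen for illustration.

def jnd(intensity, weber_fraction=0.02):
    """Smallest change in the stimulus a user is likely to notice."""
    return weber_fraction * intensity

print(jnd(100))  # -> 2.0
print(jnd(500))  # -> 10.0
```

The practical consequence for interface design: the brighter, louder, or larger something already is, the bigger a change must be before users notice it.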



Wednesday, November 17, 2004

Search Hints 2

PJM Interconnection, a utilities company (electric power) outside of Philadelphia, is looking for software folks.  They are located at this web site.  I will continue to alert you to companies I know are hiring.  Soon there will be a web page associated with my homepage where you can browse the most recent entries.  (I realize that blog entries aren't the most efficient way to alert you to openings.)
Later.




Thursday, November 11, 2004

European Vacation


No, not the National Lampoon movie!  This post is a vacation from the predominant way developers view Object Oriented design and development, that is, not only as a design methodology but also as a vehicle for reuse.  After a slow start in generating reusable artifacts, the patterns movement has certainly provided a great deal of success in reuse and has greatly influenced, if not dominated,  the teaching of Object Oriented design and development.



In the European School of programming, also known as the Scandinavian School, programming design and development is considered to be modeling and simulating a real or imaginary part of the world through objects and classes. The focus was on developing this model, not on reuse.  You can find a description of the thought behind it here.  Kristen Nygaard is considered to be the founder of the Scandinavian School of programming.  His site is fascinating, and it is particularly interesting how during his life he combined his work in Object Oriented research and computer science (which he preferred to call informatics) with his interest in social issues.  An example of this unique combination can be found in this paper by Nygaard.



Nygaard, along with Ole-Johan Dahl, developed the first family of OO languages, Simula.  For those of you who are programming language mavens, BETA is a more recent effort.



There is much more to say about Nygaard, Dahl and the entire Scandinavian school of programming.  In fact I am considering it as a potential book (or at least long paper) project.  This will not be the last post on the school in this blog.  If you have other links, papers or experience with the Scandinavian/European school of Object Oriented programming and design, share them in a comment.  Later!



Wednesday, October 20, 2004

Search Hints

It is becoming increasingly difficult to find entry level jobs in today's marketplace. I have been asking around, especially about Silicon Alley, a New York City state of mind for entrepreneurs popular during the .com era. A friend of mine, Doug Blewett, gave me these leads: check out the "gigs" section here and check out this place to post your resume.



If anyone else has suggestions for sources of entry level jobs in software engineering, development or related fields, posting the information in a comment would be appreciated. There are a lot of fine fledgling software engineers at Stevens and other schools in the New York/Philadelphia/Boston area who would appreciate their first break. I will continue to post leads as I uncover them. Thanks and later!



Monday, October 18, 2004

Test This Site

A motif in my lectures is the lack of appreciation for testing and testers. In addition, especially in the era of outsourcing, there is simply not enough research done on testing. Since my lecture this week in the Software Engineering class is on testing, I was rummaging through my testing resources. A gem is this site, The Testing Standards Working Party, hosted by the British Computer Society's Special Interest Group in Software Testing. The site offers everything from a glossary to How To's to Papers on testing. Highly recommended!



Do you have any suggestions for testing resources? If so, add a comment mentioning them. Thanks and later!



Wednesday, October 13, 2004

Q Quest

Striving for quality in products and process must be measured quantitatively, so that you know you are making progress, and must be attended to daily, so that you do make progress. However, quality does not end there; it begins there. Continuous improvement is not a slogan, a work principle or a technique; it is a life style. Striving for quality in all aspects of your life is something we all aspire to, although it is difficult and requires focus. However, even after writing those sentences, the words seem hollow and do not convey an appreciation of the quest for understanding and achieving quality.



So the best I can do is recommend an author, Robert Pirsig, and his books, which do a great job of exploring quality. Zen and the Art of Motorcycle Maintenance was his first book; it was popular when I began graduate school and still is today. A web version of Pirsig's book can be found here, but if you find you like the book, buy the paperback to support this kind of work. His second book, Lila: An Inquiry Into Morals, continues his exploration. Both books are written as novels and take some work, but at the end you will have a better appreciation for the quest.



Any other recommended Quality books out there, or any opinions on the Pirsig books? I would appreciate your comments. Later!



Tuesday, October 12, 2004

Figure to Configure

One of the simplest and most valuable software engineering methodologies/processes to implement is software configuration management. It pays off time and again when you need a convenient way to reverse a change made to software, back up an entire project (since it is all in one place) or find a file relevant to a specific project. I have a CVS repository on my notebook, and all of my software code and documentation are stashed there.



Unfortunately, most do not appreciate the value of configuration management until they are burned badly. In the past five years I know of two sizeable software projects that had to do substantial rework and recoding because they did not use a software configuration mechanism. Several others that did not have a software configuration mechanism had problems reassembling the files of projects after employees left. All of the projects that I manage must use a software configuration mechanism, and I use it for my personal software. Our current favorite is CVS, an open source solution, but there are other popular commercial and open source options available. A great place to get the latest information on configuration management tools is CM Crossroads. It provides information, pointers and advice on all the latest configuration management tools. In addition, if you use CVS, the pdf of Open Source Development with CVS, third edition by Fogel and Bar is here.
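The payoff described above, being able to reverse any change, comes from keeping every revision retrievable. A toy sketch of that idea in Python (CVS itself stores per-file deltas and does far more; this only illustrates the principle):

```python
# Toy version store: every commit keeps a full snapshot keyed by revision
# number, so any file can be recovered from any point in history.
# This is an illustration of the concept, not how CVS is implemented.

class ToyRepo:
    def __init__(self):
        self.revisions = []  # list of {filename: content} snapshots

    def commit(self, files):
        self.revisions.append(dict(files))
        return len(self.revisions) - 1  # revision number

    def checkout(self, rev):
        return dict(self.revisions[rev])

repo = ToyRepo()
r0 = repo.commit({"main.py": "print('v1')"})
r1 = repo.commit({"main.py": "print('v2')"})
# A bad change is never fatal: the old revision is still there.
assert repo.checkout(r0)["main.py"] == "print('v1')"
```

Real tools add deltas, branching, merging and locking on top of this core, but the insurance policy is the same: nothing committed is ever lost.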



If you have experience with configuration management tools or resources or question about their use, share it with the community by adding a comment. Later!



Tuesday, October 5, 2004

We have met the enemy and they are us!

An article at msnbc.com states that software disasters are often people problems. The direct link to the MSNBC article can be found here. I was referred to the article by a posting in slashdot. Quoting the article:

Such disasters are often blamed on bad software, but the cause is rarely bad programming. As systems grow more complicated, failures instead have far less technical explanations: bad management, communication or training.



Although the article confuses what software is and is not (the SAP system is software, the application software added to it is not), the points it makes are still relevant. It refers to issues with requirements, stakeholder participation, operations, testing, end user training and inhomogeneity of technical understanding. Inhomogeneity of technical understanding is a Multics term for a horrible project scenario which exists when managers do not understand software technology and developers do not understand the big picture, the context in which their software would run. Unfortunately this occurs all too frequently.



The article cites a National Institute of Standards and Technology study that software bugs cost $59.5 billion each year and that a third of that can be attributed to inadequate testing. The pdf for the 300+ page report, "The Economic Impacts of Inadequate Infrastructure for Software Testing," can be downloaded here.



The title of this post was a quote from the comic strip Pogo, penned by Walt Kelly, which was an adaptation of a Commodore Oliver Hazard Perry quote after a naval battle. You can find more information about the context of his quote here.



As always, comments are appreciated. Later!



Monday, September 27, 2004

Useful Use Cases

Requirements elicitation and representation is always difficult, especially when the results need to be understood by two potentially diverse populations: the stakeholders (users, management, customers, ...) and the architects, designers and developers of the system. Use cases, emphasizing scenarios, are a great middle ground for representing large hunks of the requirements in a style that is both understandable to the stakeholders and specific enough for the technical community charged with architecting, designing and implementing them.



Use cases were introduced by Ivar Jacobson in his book, Object-Oriented Software Engineering: A Use Case Driven Approach, ISBN 0201544350. However, if you are not ready to purchase the book for $60, a great free resource on Use Cases is Rebecca Wirfs-Brock's OOPSLA 2002 tutorial, The Art of Writing Use Cases. You can download a copy of it here. Wirfs-Brock's site is an excellent resource for many OO requirements, design and architecture topics. Definitely worth browsing.



If anyone has had experience using Use Cases or knows of further resources, I would appreciate it if you would add a comment. Later.



Monday, September 20, 2004

Reliable Advice

Attached is a description of the new edition of John Musa's book, Software Reliability Engineering: More Reliable Software Faster and Cheaper. Thanks to Professor Bernstein for forwarding John's notice to me. Software reliability is one of the key "ilities" (along with usability, enhance-ability, ...) of interest when developing a software architecture. John's book has been a classic in this area and provides the reader with tons of practical advice and even free software to deal with software reliability issues.
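For a flavor of what software reliability engineering quantifies, Musa's basic execution-time model has failure intensity decaying exponentially as failures are experienced and the underlying faults are fixed. A minimal sketch (the parameter values are hypothetical, not taken from the book's data sets):

```python
# Musa's basic execution-time model: failure intensity decays
# exponentially with execution time as faults are found and removed.
#   lambda(t) = lambda0 * exp(-(lambda0 / nu0) * t)
# lambda0 = initial failure intensity, nu0 = total expected failures.
# The parameter values below are hypothetical.
import math

def failure_intensity(t, lambda0=10.0, nu0=100.0):
    """Failures per unit execution time after t units of execution."""
    return lambda0 * math.exp(-(lambda0 / nu0) * t)

for t in (0, 10, 30):
    print(t, round(failure_intensity(t), 3))
```

Fitting such a curve to observed failure data is what lets you estimate when a reliability objective will be reached, rather than guessing when to stop testing.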



Has anyone else used or read Musa's book? I would be interested in your comments. Later!



Musa's description of the book:



The new edition of "Software Reliability Engineering: More Reliable Software Faster and Cheaper," now available, focuses on making software practitioners more competitive without working longer. This is a necessity in a world of globalization and outsourcing where professionals want more time for their personal lives.



John D. Musa has written what is essentially a new book. It reflects the latest software reliability engineering (SRE) practice. Reorganized and enlarged 50% to 630 pages, the material was polished by thousands of practitioners in the author's classes at a wide variety of companies worldwide.



One of the book's new features is a series of workshops for applying each area that you learn to your project. The frequently asked questions (answered, of course) were doubled to more than 700. All the popular features of the previous edition have been updated and rewritten. These include the step-by-step process summary, the glossary, the background sections, and the exercises. The user manual for the software reliability estimation program CASRE, downloadable at no charge from the Internet, reflects the latest version. The list of published articles by SRE users of their experiences now numbers more than 65. Everything is exhaustively indexed to make the most detailed topic easily accessible to those using it as a deskside reference.



The book separates basic practice from special situations for faster learning. Musa presents the material in a casual, readable style, with mathematics placed in separate background sections. All this was done to make the book especially effective for self learning. It also furnishes everything you need to implement SRE in your organization, even discussing the most effective methods of how to persuade people to adopt the practice.



One of the first Print on Demand (POD) professional books, it is coupled with a web site where you can browse and order the book. The web site has a complete detailed Table of Contents and extensive samples from the book that simulate the bookstore browsing experience. POD uses the latest automated technology to custom print each order and ship it anywhere in the world. It is as fast as you can obtain a traditionally published professional book. The cost is similar. POD technology makes it economic to keep the book in print for as long as even a handful of people want it.



Musa is one of the founders of the field of software reliability engineering. He is an IEEE Fellow and was Engineer of the Year in 2004. Listed in Who's Who in America since 1990, Musa is a prolific researcher, international consultant and teacher, and an experienced and practical software developer and manager. ACM Software Engineering Notes noted of the first edition, "The author's experience in reliability engineering is apparent and his expertise is infused in the text."



The book, published by AuthorHouse, comes in hard cover and paperback editions. It contains 630 pages, including prefaces and appendices.



Tuesday, September 7, 2004

First and Second Contact

I often provide pointers to free resources on the web. In one of my latest wanderings I discovered a treasure of information on the NATO conference that literally began software engineering! It has the pdf's of the first and second NATO conferences on software engineering in 1968 and 1969 -- the documents that began software engineering. You can find this remarkable collection here.



Thanks to Brian Randell (one of the original editors) and the University of Newcastle Upon Tyne for scanning and hosting these documents. Brian also provides a retrospective and some incredible pictures of the staff and luminaries that attended the conference. Future posts will delve into these documents.



Later.



Friday, September 3, 2004

Software Infrastructure

In the past few years I have used a consistent analogy for the changing perception of software in corporations. Software has gone from being considered the paint on the walls of the building, changed every few years, to the building itself, part of the long-term assets of the company. This changes the approach to current software from a philosophy of replacement to a philosophy of maintenance. It also should change the approach to building software from one of knee-jerk response to today's need to careful planning, architecting and designing of an entity that will be around for decades.



The rhetoric of the software industry has changed to match the perception. For instance, we now talk of trustworthy software. However I do not believe the practice of software matches the rhetoric. Others agree. Fernando Pereira has referred me to an article by Dan Bricklin titled, "Software that lasts 200 years." In the article Bricklin discusses the changes necessary for prepackaged software to meet these longevity aspirations. I highly recommend the article and would be interested in your opinions both of the article and whether you feel that the software industry and the software engineering community are addressing these challenges. Later!



A side note: Fernando Pereira has a terrific web log called Fresh Tracks where he discusses an amazing array of topics, including skiing, books, science and music. Highly recommended. Fernando is chairperson of the Computer and Information Science Department at the University of Pennsylvania.



Thursday, August 19, 2004

Measure twice, cut once!

Dennis Lowary's Log Book entry presents a case study of programming in the small (one person) on a short-duration project. He tried to use some software estimation and requirements techniques but abandoned them early because he felt they were too much overhead for the project. In the end, he felt the project did not turn out quite like he intended and wondered whether he had abandoned software engineering discipline too hastily.



There is an old carpentry saying, "Measure twice, cut once!" Meticulous attention to upfront detail makes the project go easier. What carpenters knew centuries ago is great advice for us, but does current software engineering practice "scale down" to small projects or do we need to revise them significantly? What do you think or, better yet, what have you experienced? One thing is sure, we need more case studies and data to understand what does and does not work! Later!





Dennis Lowary's Log Book entry:



My entry is about thoughts I had while working on a recent project.



Courses like this focus on projects of long duration (months and years of work for a group of people). Can the theories discussed be applied to situations that are significantly smaller in magnitude? What should/could be done under the following conditions:



You’re told: “Create a database-driven website of some useful functionality”



You have:



* 14 days starting now



* A single-person staff (yourself)



* Very limited, or no experience with the languages you must use



* No advanced tools



With such a short time frame, is it possible to apply anything discussed in this course such that it is a

benefit? This was something I considered recently while working on this project. After choosing an overall function for the site, I attempted to create a set of requirements. Part of the deal was I had to state specific aspects of the intended functionality and once stated they had to be completed. I needed to not make it too simplistic, but a big concern was not to say I would do something that I couldn’t. I don’t have an enormous amount of experience with coding, but I have enough to know that something I might think simple on paper may be unforeseeably difficult to implement.



I considered trying to create some sort of ranking system for the possible features so that I could decide which ones to commit to. After doing that for about 5 minutes, I thought “What am I doing? I can’t worry about this that much; there isn’t enough time.” So I ended up putting very little thought into what it might take to implement each one and just picked some that seemed easy enough to be attainable. My hasty rough estimates were not good. I had a very hard time getting everything working on time, and what did work was either ugly to look at, ugly from a logical point of view, very not user-friendly, or a mixture of those things. I have to think that not doing any (in-depth) type of metrics or estimation that I saved time that I (apparently) needed to get the thing to the barely usable state that I did. On the other hand, I wonder if I spent more time on estimation, could it have possibly come out better. I certainly understand that it would have with a large scale version (larger staff and time frame) but with such limited time and resources is it possible at all?



Wednesday, August 18, 2004

Can Open Source become too popular?

In his Log Book entry, Lucas Vickers discusses his experience with Open Source support and contrasts it with his experience with Microsoft support. He also poses an interesting question at the end: as more folks use Open Source, the average expertise level will go down and more Open Source developers will be bombarded with newbie questions. Do you think this will happen? If so, how can Open Source developers handle the support demand?



As an added bonus in his Log Book, Lucas gives us a pointer to another rule base language, Drools. In a later blog entry I will give you my impressions of the Drools tool. Later!



Lucas Vickers' Log Book Entry:



Open source is the topic of this week’s discussion, and it is something that I am fairly interested in. I have worked with open source products, and I have also worked with Microsoft products. I know using Microsoft as the model for what comes out of corporate development is a little unfair; however I am going to do so because I have the most extensive experience with them. To be honest I have found Microsoft products that work really well, and I have found open source products that work really well. I once developed a program for the pocketPC platform using Microsoft tools, and oh my that was one of the buggiest platforms I have ever worked with (this was back in 2002). I have also recently tried to integrate Drools, an open source and Java based rules engine. After a few weeks of heavy coding with Drools I found that there was a major flaw in the implementation which halted my work. What I am trying to say is that each product has its flaws. The difference is as soon as I had a problem with Drools, I sent off an email to the developer, met him on IRC on #drools, and we resolved the problem within a few days. Any problem I had with Microsoft, all I could do was search MSDN and pray that they had an answer to my question. I think that one major factor to consider is the group of developers behind a project. Open Source developers really love what they do, work in their spare time because it’s a passion, and are very easy to contact. Microsoft developers, however, hide behind the veil that is Microsoft and it is near impossible to get an explanation for why a function works a certain way. I think knowing that OS developers are developing because they want to and love to, and are accessible, gives open source software an edge. I wonder if this edge would disappear once the average user began attempting to use OS software and the developers became flooded with repetitive and useless questions.



Sunday, August 15, 2004

Catch-22

The following entry from Kaylon Daniels begins a series of entries from my summer software engineering class. It seems that every semester, outsourcing becomes a larger issue with the graduating generation of computer scientists. Kaylon's entry discusses the effect it is having in this country and speculates on the long term effects. A later entry in this series will discuss some other aspects of outsourcing and its effect on quality. I would appreciate any comments on this post or your views on outsourcing, its effects and the potential opportunities outsourcing may provide.



Here's Kaylon's Logbook entry.



Most of my log book entries have been relatively short. But this one will be one of my longer posts. I just wanted to write some of my thoughts and discoveries about outsourcing. This is an important topic to me because I am unemployed at the moment and have been wondering how much of the credit I can assign my employment status to this phenomenon. I’ve read some articles in Business 2.0 and Business Weekly and I’m unsure which opinion to believe. Biz 2.0 states that outsourcing is going to have long term positive impacts on the U.S. economy. The middle class in other countries can afford to purchase our goods; this will produce more jobs in the U.S. They also feel that it frees Americans to pursue more advanced technologies while low level maintenance work is passed on to the third world. Now, Business Weekly states that outsourcing has nothing to do with the low level of tech jobs available. They state that the U.S. worker has been undone by his/her own productivity. Because the American worker can do more with less, there is less of a need for skilled workers. Maybe they are both correct to some extent, but I am sure of one fact. The last job fair I attended, I waited in a line that went around an entire city block. I couldn’t believe what I was seeing. This country is beginning to resemble Brazil more every month. Outsourcing is just another blow to the middle class which is already starting to disappear. Personally, I haven’t seen another tech industry pop up to replace dot-com businesses. There has been talk about nanotech and commercial space goods, but how can we capitalize on these markets when very few Americans are actually studying sciences. Most students in these fields are from nations we are currently outsourcing to. These emerging countries should be able to produce enough capital (thanks to a growing middle class) to research these new tech fields; why would one assume that the U.S. will have any real influence on the next generation of technology. 
It seems to be a catch-22 situation.



End of logbook entry. Before I end this post, a brief interlude on the phrase Catch-22, which describes a no-win situation. Catch-22 was written by Joseph Heller; Alan Arkin starred in the 1970 movie. It concerned World War II bomber pilots. Orr was a pilot who was trying to get grounded by reason of insanity. The defining moment in the book is captured in this quote:



There was only one catch and that was Catch-22, which specified that a concern for one's safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he was sane he had to fly them. If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to. Yossarian was moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle.
"That's some catch, that Catch-22," he observed.
"It's the best there is," Doc Daneeka agreed.


If you would like to discover more about Catch-22, check out this site, where I refamiliarized myself with the book. It is a great book that was very popular in courses when I was in college in the late '60s and early '70s.



Later.



Thursday, July 29, 2004

Invasion of the Genetically Altered Mutant Test Cases!

In my lectures on Software Testing I remark that there is not nearly enough research done on software testing. This summer, in my web based software engineering course, we were discussing mutation testing and one of my students mentioned the potential application of Genetic Algorithms to mutation testing. After a bit of web searching I came across two articles that explore Genetic Algorithms for testing (pdf of article 1, pdf of article 2). Please share any comments you have on these articles or additional articles on this topic. Please also share any additional resources and pointers to research on software testing. Thanks.



In doing the research for this post, I came across yet another source of free educational material on the web. Samizdat Press has an eclectic collection of articles, including a tutorial on LaTeX and pointers to other free educational material sites. In the near future I hope to list most of these resources on my homepage.



My summer posting has been light. Hopefully you will see increasing frequency in the next few weeks. Later!



Thursday, June 24, 2004

Estimation is the point

Estimating the effort necessary for a software project is always difficult but is always necessary. As the project progresses your estimate should converge on what is actually occurring in the project. However, often initial estimates are needed immediately after the project is conceived, to determine whether it is affordable/doable. At that early point you have to draw on your experience and analogy to produce an estimate, noting places where the dragons (unknowns/traps) are.



If you do have more time, or after those initial "back of the envelope" estimates, there are tools that can help you estimate the effort. NASA has a great site on software estimation that includes an informative, downloadable handbook on estimation techniques; it can be found here.
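As a concrete example of a model-based estimate, the basic COCOMO formula relates delivered size to effort through two published coefficients. A minimal sketch (the coefficients shown are the classic "organic mode" values for small, familiar projects; treat any such estimate as a starting point, not a commitment):

```python
# Basic COCOMO: effort (person-months) = a * KLOC**b.
# a=2.4, b=1.05 are the published "organic mode" coefficients;
# other modes (semi-detached, embedded) use larger values.

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Rough effort estimate in person-months for kloc thousand lines."""
    return a * kloc ** b

print(round(cocomo_effort(32), 1))  # person-months for a 32-KLOC project
```

The exponent above 1 captures the diseconomy of scale: doubling the size more than doubles the effort, which matches the convergence-over-time point above, since early size guesses dominate the error.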



Function points are one of the more popular software estimation techniques, and Stevens teaches the methodology. IFPUG (the International Function Point Users Group) is the society that promotes function points, and its web site can be found here. However, in order to obtain the full benefits of the organization you must join it; there is a membership fee of $50 for students and $185 for professionals (plus an additional $75 first-year fee).
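For a taste of the counting side of the methodology, here is a sketch of an unadjusted function point tally using the standard average-complexity weights. A full IFPUG count also rates each item's complexity and applies a value adjustment factor, which this toy omits, and the counts themselves are made up.

```python
# Average-complexity weights for the five function types.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    # Sum each function type's count times its weight.
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

ufp = unadjusted_function_points({
    "external_inputs": 6,
    "external_outputs": 4,
    "external_inquiries": 3,
    "internal_logical_files": 2,
    "external_interface_files": 1,
})  # 24 + 20 + 12 + 20 + 7 = 83
```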



Any experiences/data on software estimation techniques that you could provide in comments to this post would be appreciated and useful, Thanks!



Finally my web site is undergoing fairly significant upgrading of information. In particular I am creating resource pages for software engineering, rule-based programming and languages. If you have any additions to those pages send me some mail. Hope you are enjoying your summer. Later!



Thursday, June 17, 2004

Creative Engineering

An ongoing discussion in my Software Engineering classes is what is engineering and what is science. Often students will express the opinion that science is creative and engineering follows rules. Most engineers and scientists would contest such strong distinctions. For example, the University of Oxford has this remark in their Engineering Science description, "The qualities of a good engineer include not only a high degree of technical competence but also imagination, strength of purpose, commonsense - and a social conscience." This is a terrific description.



I would be interested in your opinion. What is your definition of Engineering Science? Is it the science of design, build, test? What differentiates science and engineering? I realize these are cosmic questions, but the discussion arises in almost every class I teach in Software Engineering in the Computer Science Department, so it is clear that many are wrestling with these concepts, and any insight on the topic would be appreciated.



Thanks and later!



Monday, June 14, 2004

Rules Redux

In the halcyon days of expert systems (the 1980's and early 90's), rule-based programming was very popular since it was a straightforward way of representing user knowledge. Rule-based programming environments also provide the opportunity to do data-driven programming -- a technique that deserves to be used much more than it currently is. Since I think rules are underrepresented in current development, I will periodically discuss them in this blog, hoping to at least increase awareness. I also am creating a section on my web page that will support folks interested in rule-based programming. I would be interested in comments from anyone who is considering or is using the rule-based technique.



If you want to get involved in rule-based programming, one of the best ways to get started is to use CLIPS. CLIPS is a rule-based environment that originated at NASA and is quite good. It also is free, and you can get information on how to download it and the excellent supporting documentation here. You can do quite a bit with CLIPS; it is very impressive and highly recommended.
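CLIPS has its own rule language, but the data-driven flavor can be suggested in a few lines of Python: facts live in working memory, and rules fire whenever their conditions match, with no procedural control flow. This is a toy illustration, not CLIPS's RETE-based matcher.

```python
def run(facts, rules):
    # Forward chaining: keep firing rules until no new facts appear.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            for fact in list(facts):
                if condition(fact):
                    new_fact = action(fact)
                    if new_fact is not None and new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

# Toy rules: classify from observed features. Each rule is a
# (condition, action) pair over simple (attribute, value) facts.
rules = [
    (lambda f: f == ("has", "feathers"), lambda f: ("is", "bird")),
    (lambda f: f == ("is", "bird"),      lambda f: ("can", "fly")),
]
result = run([("has", "feathers")], rules)  # derives ("is", "bird") and ("can", "fly")
```

Notice that the program has no main line of control: asserting a different initial fact would drive an entirely different chain of inferences, which is the essence of data-driven programming.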



If you are interested in sampling the latest in rule-based technology, then I suggest Production Systems Technologies and their RETE-II technology. Charles Forgy is the Chief Scientist of PST and the inventor of the RETE algorithm (used in CLIPS and many other rule-based environments). Quite simply, RETE made rule-based technology commercially viable. I have known Charles Forgy for decades and have used everything from OPS4 to his newest stuff, and it is great. PST's environments are commercial products; they are not free, but they are worth it if you need to build high-performance products. (BTW, I am not involved with PST in any way other than using and liking their products and knowing Charles (Lanny) Forgy.)



I will have more on rule based systems and data driven programming in the next few months. Check here and on my web page for further information. Later!



Monday, May 24, 2004

Out Sourced

One of the more controversial topics in my Software Engineering classes is outsourcing. Ankita Parikh was the author of this entry and makes some great points. My comment in Ankita's logbook was: "Outsourcing is a difficult issue and, as you correctly stated, it has been happening for centuries. The response for us has to be: is there an opportunity here? Even with outsourcing, folks still are not satisfied with the state of software development and maintenance – outsourcing does not cure all the issues, it mainly reduces the cost."



Ankita's entry and my response reminded me of an article by Herbert A. Simon, Nobel Prize winner in Economics, titled, "Prometheus or Pandora: The influence of automation on society." It was in the November 1981 issue of IEEE Computer. Although not spot on the topic he does discuss how automation affects employment and it is a great read. Unfortunately, it is in no electronic archive, so you are going to have to walk to the library to get a copy. Some gems from the article: "... the level of employment in a society has nothing to do with that society's level of productivity"... "In a very obvious way a society is capable of making use of the human resources it displaces by increasing productivity." ... "We have seen no large disturbances in our society over the last 35 years that were caused by worker displacement through automation."



Since I believe that outsourcing is a technological twist (caused in part by cheap, high-quality communication), the article does bear on the current issue. The key is to see what opportunities outsourcing provides. Concentrate on improving the lot of software, not simply making it cheaper to produce. Making software that is more reliable, flexible, human-engineered... the software industry has lots to improve on that adds value and that outsourcing wouldn't fix.



So read the rest of this post and read Simon's article and think about the opportunities. It would be great if you would add comments to this post. I am sure this will not be our last post on outsourcing! Later.



“Exporting America”



I just read an article in TIME about “Exporting America” and the concern it raises. Is outsourcing going to take away more jobs than it already has? According to the article, by the end of the year 588,000 jobs will have left the country, and 1.6 million by 2010. Should we all be worried? I am really not sure how I feel about this current uproar, I guess because I know I will have a job once I graduate. But I am a little worried that if more and more jobs in Information Technology are being sent overseas, I may be at risk of losing my job in another 3 – 4 years. Many of the big powerhouses don’t see a problem with outsourcing because for them it’s saving money. It’s all about making money, huh? It’s tough for me to take sides because I understand both points of view. I have never been a selfish person and I won’t be starting now. I won’t deny it’s a problem nonetheless. It’s ironic that outsourcing has become such a big issue recently, when we’ve had outsourcing since the start of our country. Everything from clothing to electronics to cars has been coming from other countries. So why the big uproar now? It has finally hit our professional job market. Before, the jobs that were being sent offshore were labor jobs – manufacturing. Now it’s more alarming because we may actually lose the jobs that we spent the last four years preparing for.



Monday, May 10, 2004

On Performance

After an inter-semester hiatus, this blog should show activity again. This post is again from one of the logbook entries of a student last semester, Aagmik Parikh, and is being posted with his permission. It deals with computational and theatrical performance (see my slashed "//" comments at the end of the posting). I offer it because there does not seem to be enough attention given to performance in the early stages of a project, yet there are many instances where we still hit the wall on performance.



Even when performance is considered, the assumption is that the user will upgrade to match the needs of the software. For esoteric applications that is reasonable but it should be the exception not the rule. An extreme example of this cavalier approach to cycles is the rumored requirements of Microsoft's next generation operating system codenamed "Longhorn." According to slashdot,

Longhorn will require dual 4-6GHz processors, 2 gigabytes of RAM, new graphics processors, a terabyte of disk and an extremely fast network. Be prepared to upgrade even your newest PC in a few years if you would like to keep up with the latest Microsoft operating system. I hope the rumor turns out to be unfounded. We need to stop the disposable computer attitude, regardless of the price drops. New PCs cause lost productivity during the transition (not to mention the costs of transition), the old PCs choke landfills, and expenditures for supporting software tax corporate budgets. At the very least we should all petition for upgradeable machines! Flame off! Enjoy Aagmik's excellent and entertaining post. Later!





A true Masterpiece .....Lord of the Rings…….making of Gollum



This week we were shown a video on the making of the Lord of the Rings, emphasizing the creation of the character Gollum. There is no doubt that the digital team involved broke all barriers in CG, but the creation of this character was really interesting. Initially they had implemented a methodology called key frame animation for the creation of this character. I don’t want to go into the details of the workings of this method, but in short it revolves around making an initial frame based on a model of the character and then working on the frame to produce motion. But with the talent of the actor Andy Serkis (voice of Gollum) and the digital team, they redesigned the whole project to produce a new version, which became a spectacular hit. In their new approach they used Andy’s motion capture (another method used in animation) to produce Gollum’s movements. Hence their new approach used key frame animation along with motion capture. This concept was implemented for the first time and it became an instant success.



Analyzing their work, the first thing that comes to mind is… anything for performance! In fact, I am sure their animation success is going to be a benchmark for others. They redesigned their whole approach to strive for ultimate performance. It can also be related to prototyping and throwaway prototyping. Their initial model lacked the energy and performance present in their later models. I also think that this is a classic example of Business Process Redesign (BPR), as they redesigned the whole process, making changes in their approach.



//I do not know if it was Business Process Redesign as much as it was technical process redesign. They really did do a great job. It is interesting that you mention performance, since performance in this instance takes on two meanings: the theatrical performance, which was enhanced by the motion capture and the skill of the actor, and the animation performance/speed, which was enhanced by their technology. When we build systems we do not pay enough attention to either brand of performance --- our systems are in front of an audience, “our users,” all the time.



Wednesday, April 28, 2004

Your Presence is Requested

A good friend of mine, Steve Crandall, is doing a survey on how college students communicate and on presence -- the ability to detect whether users are online or available (source: webopedia). He would like you to complete this really short survey. You can access the survey here.



Steve is part of a company, Omenti Research, that specializes in user-centered technology. They are a great group of folks. Steve also keeps a blog on his perspectives on science and communication.



I hope you do help Steve and participate in the survey. Some software engineering and artificial intelligence posts coming real soon. Good luck with finals and papers for those of you struggling with the end of the term! Later!



Monday, April 19, 2004

Book Learning

On Friday I gave a talk at Hobart and William Smith Colleges. It is a revision of the earlier talk I gave at Drexel and can be accessed either at my home page or here. If you are interested, one of the faculty members at Hobart and William Smith, Professor Eck, has written a great introductory text on java that is available on the internet for free! You can get it here.



Speaking of free books, I downloaded this intro calculus book by H. Jerome Keisler of the University of Wisconsin and was very impressed. A post on slashdot recommended it.



One book that is not free but is the introductory and reference text for Artificial Intelligence is Stuart Russell and Peter Norvig's, Artificial intelligence: A modern approach, Prentice Hall, second edition, ISBN 0-13-790395-2. I plan to use it in a future course and it would be a great book for anyone wanting to understand the current issues in AI.



In a later post I will survey some of the more general places on the web for free downloadable texts. I would appreciate any suggestions of worthwhile downloadable texts or comments on the texts I suggested. Later!



Thursday, April 8, 2004

Dissonant Music

On almost every college campus, and in most places where music lovers congregate, the issue of music piracy on the web is being discussed. Illegal distribution of music on the web has cost the music industry billions. There are numerous sides to this issue. Sean Brady, in his post, provides his provocative, well-researched view.



I am sure many of you have comments and views on this one. Later!



Sean Brady's post:



In Lecture 4, Professor Vesonder discussed with us many ways to ruin a project, including pushing a new technology to market too quickly. One of the sub-issues raised was whether the market/environment was ready for the new technology.



Professor Vesonder recalled a project that was impacted by this issue: AT&T's a2b music service (launched in 1997), which delivered songs to users via the Internet. At the time, the market/environment was not ready because this service was forced to compete with other services that offered the same product at no cost. AT&T's business model was therefore not viable. Today, under different market conditions, the leading legal music download service is iTunes, which has sold more than 50 million tracks since last April, easily outdistancing all other download services. A discussion followed on the fairness of iTunes' current pricing scheme. After further consideration of this issue, I would like to share the following thoughts:



Have you heard "Bad Moon Rising" played on the radio lately? How about "Proud Mary" or "Born on the Bayou?" The answer is probably yes. These songs, and others from the Creedence Clearwater Revival catalog, have been played continuously, on one station or another, and under one programming format or another, in every market since they were released thirty five years ago.



Earning untold tens of millions of dollars in royalties in the process. But not for the person who wrote them, CCR frontman John Fogerty. He hasn't realized a cent from those songs for over thirty years. To escape his contract, his record label forced him to sign over the rights to his own music.



Fogerty's situation is by no means unique. In fact, this industry has left many of rock and roll's pioneers poor and forgotten, after having built its fortunes on their talents. But Fogerty's case has always struck me as the most glaring. I unavoidably think of it when the record labels and their umbrella organization, the RIAA, invoke "fairness" when discussing their policies and practices.



Nonetheless, let me make the RIAA's case, in terms its socially-conscious, politically-sensitive membership may be uncomfortable stating (if not the association itself):

-- We live in a society which recognizes intellectual property rights, and for good reason

-- Those who hold the rights to intellectual property, like any other property, are free to dictate the price at which it may be purchased

-- Those who obtain intellectual property without the consent of its rightful owner are guilty of theft, and stand to be prosecuted



Note that these stipulations are entirely consistent with the following time-honored American principles, respectively:

-- The right to private property

-- Our commitment to a free market

-- The rule of law



Using such arguments, along with the aforementioned appeal to our sense of fairness, the RIAA has embarked upon an aggressive campaign to stop file sharing, culminating, most famously, in the shutdown of the original Napster. It has also filed suit against a number of file-sharing software developers and most infamously, individual downloaders. This, in turn, has paved the way for the launch of several new for-profit download services, such as Apple's iTunes.



So, how do these services determine their pricing? No differently from the companies found in any other industry. It is extraordinarily simple:



The potential provider determines the cost that will be incurred producing or performing the good or service, and the price a consumer would be willing to pay for said good or service. If the former is greater than the latter, the good or service is not likely to be produced or performed. However, if the latter is greater than the former, the good or service is likely to be produced or performed, and at a price a lot closer to the latter than the former.



As you may have noticed, the above calculus is entirely indifferent to "fairness". And yet, the RIAA, the download services, and their defenders insist upon appealing to us on ethical grounds. So, is it fair, as they contend, to charge consumers a dollar per downloaded song?



For example, they often remind us of all the album sales the industry stands to lose with such a business model. By making this argument, the defenders seem to concede the longstanding charge that most albums contain a lot more in the way of filler than hits. Why, otherwise, would album sales drop so precipitously? Then again, why wouldn't this new option have the exact-opposite -- or at the very least, a mixed -- effect? Think of all the additional tracks the companies stand to sell, now that consumers are freed from the obligation to purchase entire albums -- heretofore a major disincentive! (Young MC, anyone? How about Crash Test Dummies?)



But in the end, if we are to gauge fairness, the issue of album sales only serves as a distraction. No, we need to concern ourselves with the actual costs incurred by the online services to deliver music to consumers, and compare it with the conventional means of production and distribution. We also must examine, in close detail, the intrinsic value of the product delivered.



Until relatively recently, we received hard copies -- on vinyl, cassette, or compact disc -- and in shrink-wrapped packaging. Artwork, lyrics, and liner notes. In short, tangible objects, which could be shared with our friends, displayed on our shelves, even autographed by the recording artists.



Regardless of the format, production required light manufacturing. Distribution was achieved in stages, involving warehouses, and the loading and unloading of trucks, until the product arrived at our local record shops. Our selections were purchased with the assistance of clerks, stationed behind counters and cash registers, and then slipped into clean, crisp bags, for the journey home.



Imagine the costs -- indeed, the sheer overhead -- associated with such a system of production and distribution. Now contrast that with the recording industry's latest business model.



It proposes to sell us music as data -- zeroes and ones. Highly-compressed, low-bit-rate files, to be specific. An inherently fragile form, really, subject to corruption, deletion, infection with malicious code, or even obsolescence. Indeed, imagine the issues which will be raised following the sudden catastrophic loss of one's entire music collection -- amassed at a cost of hundreds, if not thousands of dollars -- upon the inevitable failure of the hard disk drive.



But wait, there's more! Proprietary codecs and formats. Copy-protection schemes and digital-rights management. Limited, even device-specific transferability of files. All to be stored on computers we are expected to purchase and maintain, of course.



And as for the distribution system itself? An entirely automated process, really: a few servers utilizing off-the-shelf operating systems and running not-particularly-complicated software. Bandwidth purchased in volume. A chief site administrator, a small technical staff (maybe they can be outsourced!) and a few hundred CSRs, (they will be outsourced) and the operation is more or less complete. We log onto our accounts, make our selections, authorize payment, and -- just like that -- our purchases are digitally transferred over the Internet. With exactly half of said transfer on our dime, utilizing our broadband connections.



To summarize, never before has an industry been in a position to realize such cost efficiencies in so short a period of time. It is historically unprecedented -- far, far beyond the simple elimination of bricks-and-mortar outlets (a business model exploited long before the advent of e-commerce by catalog companies.) And at a dollar per downloaded song -- roughly comparable to the rate we have grown accustomed to paying over the past fifteen years, at twelve tracks to a conventional album, priced around $15 -- it's obvious the record labels have no intention of passing any of those savings along.



Wait, maybe they've decided to offer the artists a bigger cut instead.

(Pause.)

Okay, everyone can stop laughing now.

"FAIRNESS?" The record labels don't know the meaning of the word. Just ask John Fogerty.



But then again, "fairness" has nothing to do with it.



The only question I have is, how long before we start to see e-S&H? After all, bandwidth don't grow on trees, you know.



Tuesday, April 6, 2004

Virtually Real

This next student logbook post should provide some interesting discussion. Julia Ryan poses an interesting situation in which the simulation architecture may be driving the real architecture rather than serving as a testbed for it. Advice on how to balance the simulator with the real is appreciated. It also would be interesting to hear your simulator experiences. Thanks and later!



Julia Ryan's entry:



I recently supported a Horizontal Integration Meeting for my program where different segments of the program were presenting their status, assumptions, issues, and paths forward. One issue was brought up by the Modeling and Simulation (M&S) group: the simulation architecture is currently ahead of the real architecture. The worry here is that the simulation architecture may drive things that we may not want in the real architecture. I'm just curious if anyone has come across this before? If yes, how was it dealt with? Was it a major issue?



Thursday, April 1, 2004

Glowing Language

As a break from all the great student posts, I would like to call your attention to two articles from slashdot. (I am a regular reader of slashdot; it provides a wealth of information on lots of computer-oriented, science, and curiously interesting topics.) The first is about a language for passwords called l33t, pronounced "leet". l33t provides substitution rules for letters, converting them to alphanumerics. For example, in basic l33t you substitute 4 for a, 3 for e, 1 for i and 0 for o. Note that the numbers vaguely resemble the letters. More advanced l33t substitutes more letters with numbers and other characters. The article mused as to whether some avant garde parents were using l33t to name their children! I would be Gr3gg in basic l33t. Just a slice of software culture.
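The basic scheme is simple enough to express in a few lines; this sketch handles only the four substitutions mentioned above, leaving everything else (including capital letters) untouched.

```python
# Basic l33t: 4 for a, 3 for e, 1 for i, 0 for o.
BASIC_L33T = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0"})

def to_l33t(text):
    # Apply the substitutions; all other characters pass through unchanged.
    return text.translate(BASIC_L33T)

print(to_l33t("Gregg"))  # → Gr3gg
```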



The second slashdot article is not exactly fun but is fascinating. The referenced site chronicles a motorcycle ride through Chernobyl, the site of the worst nuclear reactor accident in history. The narrative and pictures are very thought-provoking, especially for those of us who lived through the Cold War.



I hope you enjoy the sites if you had not already seen them on slashdot. Later!



Tuesday, March 30, 2004

License to Code

One issue that may affect all of us professionally is whether software engineers should be licensed. In response to another student's post, Rafael Polanco provided a case for software licensing. What is your opinion on licensing software engineers? Later!



Rafael Polanco's entry:



These are excellent questions and ones that must be addressed. We are seeing more and more applications for software in our daily lives. To me, the future shows us running our homes via the internet using handheld devices. We would remotely control our homes by setting the temperature and opening and closing windows and shades. Maybe we would start cooking so that by the time we get home, the food is ready. We would view moving images from our office or train, etc. These things can be done today!



Software is being applied to health systems more than most people think. Not only do MRIs use microprocessors running software, but operating-room robots do as well. There have been cases where the doctor performing the operation was on the opposite coast from where the patient was. I believe in these cases they used dedicated lines of communication, not the internet.



Also, some traffic lights in New York's borough of Queens are dynamic. Depending on whether the US Open or the Mets are playing, the traffic lights on most of the major avenues and boulevards in Queens change their timing schedules. The city likes the way this has been working so much that it would like to roll this out to every traffic light in the city!



The point is that we are relying more and more on software to continue the advancement of technology in many fields. From health systems to civil engineering, software is showing its face, so something has to be done to make sure proper procedures and sound practices are used in its development. In the dental field, doctors must take state exams to legally practice in that state. I have seen want-ads for software programmers where the minimum requirement was that the candidate have a high school diploma.



I think the field of software engineering has to start creating some kind of licensure, in conjunction with the law, that grants the privilege to develop and/or design software systems based on the field in which they are to be applied. I think this is the only way to force more companies to use software engineering. To call yourself a health systems software company, you should have to pass inspections. There are standards out there that are recommendations, and to date the only fields I know of that use them are defense and aerospace. This is because the user, the US Government, mandates the use of these practices or else the company gets no contract.



Monday, March 29, 2004

Game Strategy

I have been a big proponent of learning from the computer gaming industry. Many industrial applications are boring and unnecessarily complex. It is difficult to make something simple, entertaining and comfortable. Ross's post provides some great heuristics for making the whole human computer interaction enjoyable. We both would be interested in your thoughts. Later!



Ross Arnold's post:



As I've mentioned in my other log book entries, I program many games for fun and for my eventual goal of running a small game company. I long ago came to a conclusion while coding games with respect to user interface - EVERY aspect of a game should be fun, not only the playing of the game. If 6 menu screens need to be traversed before the game can begin, then each and every one of those menu screens should be fun to look at, and fun to use. The buttons should be pleasurable to click on, visually appealing, smoothly functioning, and easy to use. I feel that game players often don't realize the joy they take in traversing well-laid-out game screens, instead focusing their thoughts on the gameplay itself - but the joy is still present, and it still pulls people back to play a game again, even if they don't always consciously realize it.



The same principle applies to my current project at work - a pocket PC used for mortar ballistic calculations. Since the screen is so small, the layout of the buttons and edit-boxes is even more important than it would be on a desktop or even a laptop computer. If a soldier is crawling along a trench in the desert, pulling out his pocket PC and plugging in a few calculations shouldn't confuse him or cause him undue stress because of a few poorly designed dialog screens.



Two web examples also come to mind: MSDN and Google. If I type 'vesonder cs540' into a Google search, Software Engineering Class Page instantly pops up in clear view as the 4th entry on the page; then one more click and I'm right where I want to be - it's quicker than using my bookmarks. MSDN, on the other hand, seems to be rather poorly organized (it might just be poor searching technique on my part, but isn't that what library organization is meant to counteract?). I must have spent at least 30 minutes straight the other night searching MSDN for the EULA for MS Visual C++ .NET Standard to see if I could use it to develop and publish programs for profit, and I was never able to find it (I don't suppose that you happen to know whether VC++ .NET Standard, as opposed to Professional which is $900 more expensive, can be used to publish?). In my opinion it was just absurd that the EULA wasn't readily available on the page that described the .NET software. I've had many troubles searching for similar things on MSDN, often items such as downloads that I am absolutely sure are present because I've seen them just moments earlier accidentally on MSDN, and then can't get back to the page I saw them on even if I type exact words that I saw on the page, or exact file names, etc., into the MSDN search engine. If I ever design a website for one of my games, which I'm sure I'll have to do at some point in the near future, I'm going to make sure that I have everything organized, readily available, and SIMPLE, so that I don't have people like myself bumbling about the site, lost somewhere in the bowels of cyberspace.



Sunday, March 28, 2004

More Testing in College!

Molly Campbell's entry on testing makes a great suggestion for how we can improve our Computer Science classes that emphasize coding. She asserts that testing is not emphasized in these classes and makes this observation: "Testing was something the professor did to grade your program, not something you had to focus on."



Molly is right on the mark with her observations. Software engineering practices should be used not only in industry but also in academics. Good software engineering practices should begin in school if we have any hope of using them in industry. Do you agree? What other software engineering practices were ignored in your computer science education? Later!



Molly Campbell's entry:



As we discussed the importance of testing during lecture, I was reminded of the lack of project testing that occurred during my undergraduate studies. The absence of testing is ironic after reading how Brooks says that testing is the most important part of a project.



In school I experienced the opposite, spending almost all my time on coding. Some professors did stress the importance of planning, but I don’t ever remember anyone emphasizing testing. Granted, our school projects were of a smaller magnitude where it was easier to get away with a lack of testing, but you’d think that if half of your time should be spent on testing, as Brooks suggests, it would bear more weight in an academic setting.



School projects usually ended up being finished during crunch time so you were just so excited to get it done that you hardly tested it at all. Testing was something the professor did to grade your program, not something you had to focus on.



The importance of testing early and often could easily be stressed more, even on the simpler projects done in school. Perhaps if it was emphasized there, it would transition to the workplace more easily and we'd have more successful projects. It's an interesting concept.



Now that I’m in the workplace, I realize that more people understand the importance of testing, but there are still similarities to my school experiences, especially when a project gets behind schedule. There is definitely still a crunch time and it seems like testing is what gets sacrificed. Even on my current project, it looks like regression testing is going to be cut in order to meet the schedule.



I think by testing early and often we will help make sure that more of our effort is spent on testing and hopefully we will benefit from getting early feedback. We should also remember the importance of testing when planning the schedule so we allot sufficient time for testing. Well tested programs will definitely provide the best results.
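The "test early and often" habit Molly describes can be made concrete with a tiny sketch: write the test alongside the code, not during crunch time. The function and its tests below are invented for illustration; they are not from her project.

```python
# A hypothetical "test as you code" sketch: the test exists the moment
# the function does, instead of being something the professor (or the
# schedule) does to you later.

def grade_average(scores):
    """Return the mean of a non-empty list of scores."""
    if not scores:
        raise ValueError("no scores to average")
    return sum(scores) / len(scores)

def test_grade_average():
    # Main case: a typical list of scores.
    assert grade_average([80, 90, 100]) == 90.0
    # Edge case: empty input must be rejected, not silently averaged.
    try:
        grade_average([])
    except ValueError:
        pass
    else:
        raise AssertionError("empty input should be rejected")

# Run the test as soon as the code exists.
test_grade_average()
```

The point is not the arithmetic but the habit: the edge case was considered before the deadline, not discovered during grading.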



Friday, March 26, 2004

A dimension not only of sight and sound but of mind.

This next log book entry focuses on communication. In class we discuss communication a lot. Brooks' Mythical Man Month is all about communication, the Multics case study discussed the inhomogeneity of technical understanding, and light methodologies such as XP stress interpersonal communication. Ed's entry squarely addresses this issue and how difficult it is. Later!



Edward Gilsky's log book entry:



Having nearly completed the first course, CS540, in the Quantitative Software Engineering MS being offered by Stevens Tech, I find myself wanting to believe that some of the material we have been introduced to might actually work and make my job easier. However, my mind keeps going back to something one of my instructors said to the students in a class I was in a long time ago.



In 1982 I was taking my third programming class for my BS in CS at Florida Tech. The class was COBOL and the instructor was a nice guy who cared about what he was teaching and his students, so he would often discuss the "why's" of what he was saying instead of just the "how's". A student brought up a problem he was having with another instructor who didn't like the way he had coded something, even though it seemed to work. The student was frustrated because he couldn't get across to the second instructor why he had written the code the way he had. He asked the COBOL instructor how to make it clear, and the COBOL instructor said "Never defend your work". That's all, just "Never defend your work, you're just wasting your time". That didn't go over too well with the class; we were very defensive of our creations and generally took it personally when someone criticized them. We decided that the COBOL teacher was just old and tired, much like the language he taught.



Over the years I began to learn that he seemed to be absolutely right. I still couldn't figure out why, but nine times out of ten, when I tried to defend (or even explain) what I had done and why (in particular to managers), the end result was that things were worse than before I had done so. Eventually I learned to avoid explaining what I was doing, even though I didn't understand why it was necessary to do so.



After some more years I considered that maybe the problem was not the people I was talking to, but me and my inexperience. With more experience I might be able to do better, so I tried again. Didn't work. Things I said just disappeared into the void or mutated into utter nonsense. When something I said made its way back to me, my general response was "I said what!? I don't even understand what that means!". It bugged me that I was apparently completely incapable of communicating this particularly important piece of information, when in all other parts of my life I seemed to be able to explain myself fine. So I thought about it some more and decided: it's not my fault. When a developer speaks to a manager (and sometimes another developer), no matter what the developer says, it comes out as babble. No matter how eloquent you are, no matter how well you have your facts together, no matter how well you know the subject matter, all they hear is "Wa wa wa wa wa" (Charlie Brown teacher noise here) because, in the end, they don't have the slightest idea what you're talking about! So, they think "I didn't understand that, therefore, it didn't make any sense, therefore, this person doesn't know what they're talking about, therefore, they're incompetent and trying to snow me". I've decided, once again, "Never defend (or explain) yourself, it's just a waste of time". Maybe that even includes all this process stuff.



Is this just my particular twilight zone or do other people experience it too?



Thursday, March 25, 2004

Change Management

In class I discussed the value of having a transition agreement when you are required to complete items on an old project while starting a new one. It is in the best interest of both the old and new projects and their management and, most importantly, it usually saves the developer's sanity. Julie Robbins provides a great case study of what happens when you can't get this sort of agreement. Anyone else have similar experiences? Later!



Julie Robbins entry:



In class we have discussed transition documents that allow the developer to cleanly move from one task to another. I think this is something that every developer has experienced and it's exactly what I needed in my last job. I was the team leader and main developer on a project before my family and I had to move to Florida. My company had an office there so I worked remotely on the project for a while. The powers-that-be thought it would be great for business if I began work on a new project they were trying to kick-off there in Florida as well.



So I began designing an application, working from a set of requirements created by someone else. I soon learned that the requirements didn't have the buy-in from all the stakeholders and that the IT support at the site was ineffective. As I tried to meet with the customers and determine where the holes were, the original person on the project who wrote the requirements left the company.



So there I was, in a chaotic environment with way too many variables that were out of my control. And.. I was still supporting the previous product by visiting customer sites in Florida and flying back and forth to meet with my old team to discuss new development.



Although a transition document would have been ideal in this situation, I'm afraid that in reality the voice of politics did not allow it. Management in both locations was in competition for new business and visibility to the main office. I didn't know it until too late but I was caught in the middle with no hope for an easy way out. Over the years, I've been able to see it clearly happen to others as well. The developer is, more often than not, the powerless pawn in the political side of software development.



Tuesday, March 23, 2004

Is it safe?

As the semester ends log book entries are picking up. Mike Bonastia in his entry asks the question of whether Windows 2000 is appropriate for safety critical applications. I suspect Mike may get some interesting comments! (The title of this entry comes from a line in the movie, Marathon Man.) Later.



Mike Bonastia's entry:



One issue that comes up in industry is what type of operating system a safety-critical application should run on. There are commercial off-the-shelf operating systems that guarantee safety (via safety certificates), but they can be rather expensive. The cheaper options are to go with a UNIX flavor or a Windows flavor. UNIX systems offer a variety of customizable features that can help make a safe environment. Windows systems are less customizable but are a cheap, fast solution. Microsoft will not guarantee a safe environment, or at least won't support or claim responsibility if something goes wrong. Is it possible to make Windows a safe environment for a safety-critical application? Some simple practices could be:

- Do not allow the user to have access to the OS.

- Run only the one application.

- Disable services that are not going to be needed.

- Use a wrapper or a piece of middleware that isolates the application from the OS.



With this in mind, I am interested to see how people feel about this issue (i.e., Windows 2000 being used as the OS for a safety-critical application).
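The wrapper/middleware practice in Mike's list can be sketched in a few lines. This is a hypothetical illustration, not a real safety mechanism: the names (SafeOSWrapper, the fail-safe callback) are invented, and a real safety-critical system would follow a certified standard rather than this toy.

```python
# Hypothetical sketch of the "wrapper/middleware" idea: the application
# never touches the OS directly. All OS services go through one
# isolation layer, so an OS-level fault drives the system to its safe
# state instead of propagating into the application.

class SafeOSWrapper:
    def __init__(self, fail_safe_action):
        # fail_safe_action: what the system does when the OS misbehaves
        # (e.g., shut outputs to a known-safe configuration).
        self._fail_safe = fail_safe_action

    def read_file(self, path):
        """Mediated file access: any OS error triggers the safe state."""
        try:
            with open(path, "rb") as f:
                return f.read()
        except OSError:
            self._fail_safe()
            return None
```

The design point is that the application code has no direct OS dependency, so the decision of what "fail safe" means lives in exactly one place.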



Monday, March 22, 2004

Another Sharp Question

C# certainly seems to be a hot topic these days. Ankita Parikh, a student in the web version of the Intro to Quantitative Software Engineering course, has posed this great question. Your help and feedback is appreciated. Later!



Ankita's question:



I am currently working on a project for my senior design class and we are using ASP.NET and C# for the project. My question is whether you know of any formal coding conventions that I could use for these languages. I know there are coding conventions for C/C++; I've found Hungarian notation and several others. But they didn't really help me. I am not even sure if there is anything available for .NET and C#.



Sunday, March 21, 2004

Use Cases in Testing

This blog entry is from Pawel Wrobel, a current student in my CS540 class, Introduction to Quantitative Software Engineering at Stevens. It is a great illustration of how versatile use cases are. I hope you enjoy it. Later!



Use case modeling is not only a good way to collect and analyze functional requirements of a system, but it also greatly facilitates testing. On my previous project our group was responsible for elicitation and validation of requirements for a new system. We decided to employ use case modeling. Preliminary requirements were obtained from our subject matter experts and a use case model was created. Subsequently the use case model was presented to the prospective users. We found out that users really liked working with the use cases, as they were easy to understand and they could play out different scenarios of their interaction with the system. After numerous iterations we arrived at our final use case model, which we included in our requirements specifications and submitted to our contractors for further development. At the same time, based on the use case model, we began developing our test cases for formal acceptance testing.
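The scenario-to-test-case step Pawel describes can be sketched concretely: each scenario in a use case (main flow plus alternates) becomes one acceptance test. The "submit order" use case below is invented for illustration and is not from his project.

```python
# Hypothetical sketch: a use case's scenarios double as acceptance
# tests. One test per scenario keeps the requirements model and the
# test suite in lockstep.

class OrderSystem:
    def __init__(self):
        self.orders = []

    def submit_order(self, item, quantity):
        # Main scenario: a valid order is accepted and recorded.
        # Alternate scenario: a non-positive quantity is rejected.
        if quantity <= 0:
            return "rejected"
        self.orders.append((item, quantity))
        return "accepted"

def test_main_scenario():
    # Use case main flow: user submits a valid order.
    sys = OrderSystem()
    assert sys.submit_order("widget", 2) == "accepted"
    assert sys.orders == [("widget", 2)]

def test_alternate_scenario():
    # Use case alternate flow: invalid quantity is rejected.
    sys = OrderSystem()
    assert sys.submit_order("widget", 0) == "rejected"
    assert sys.orders == []

test_main_scenario()
test_alternate_scenario()
```

When a scenario changes after a review with the users, the corresponding test is the one that needs updating, which is part of why use-case-driven testing traces so cleanly.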



Saturday, March 6, 2004

Star (language) Wars

This is a logbook entry from Minh Tran, a student in my Introduction to Quantitative Software Engineering class. Logbook entries, for readers who have not been in my classes, are reflections on software engineering topics done on a weekly basis as part of the course requirements. I feel that maintaining a logbook is an important mechanism for learning over your professional career and want to get my students in the habit of maintaining one. Minh was responding to an entry I provided in class that suggested that one sure way of getting developers into a debate is to ask what their favorite programming language is. We discussed reasons why developers felt that way, which included issues of support, relative performance, first language learned ... Minh expands this discussion to issues of Microsoft's support for languages and asks for some input from our readers. Later.



Minh Tran's logbook entry:



In class 9, the Professor gave us an entry about "Star (languages) Wars": C++ vs. Java. Fortunately, I had an opportunity to work with Java, HTML, ASP, and SQL on Microsoft platforms, and C++, MQSeries, and Oracle on UNIX/Linux, over the last couple of years. Currently, I work on new projects that use Visual C++ 6 and Access for the database. 80% of the coding is done and being unit tested. The software release date is near. As far as I know, Microsoft has stopped supporting Visual C++ and has a new tool, Visual Studio .NET, using XML and ADO.NET for data access. A few weeks ago, one of the project leaders mentioned the new tool to me; he wanted to migrate standard C++ to .NET. I did not know what to tell him, because neither I nor the team has any knowledge of .NET.



Should we stay with Visual C++, or move to the new .NET tool, or Java?



Your inputs are greatly appreciated.



Thursday, March 4, 2004

Far Out Architecture

This entry follows up on a previous post with regard to NASA's efforts in software architecture and design (Software for the Stars). I am now reading through the articles referenced and found a gem by Dvorak, Rasmussen, Reeves and Sacks, "Software Architecture Themes in JPL's Mission Data System." You can get the paper here.



This paper is a great discussion of an architectural approach using a state-based architecture. The authors detail their efforts to decrease coupling between modules through coordination services, and the examples they provide are easy to understand and nicely illustrate the architectural point emphasized. They also introduce 13 themes of the architecture, describing each and illustrating by example. Some examples of these themes are: construct subsystems from architectural elements, not the other way around; system state and models form the foundation for monitoring and control; express domain knowledge explicitly in models rather than implicitly in program logic; for consistency, simplicity and clarity separate state determination logic from control flow; design interfaces to accommodate foreseeable advances in technology; etc. All of this is then modeled in the UML.
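One of those themes, separating state determination from control flow, can be sketched in miniature. The heater example below is my own invention to illustrate the idea, not anything from the JPL paper: one component estimates system state from measurements; a second decides actions from that estimate alone.

```python
# A minimal sketch of "separate state determination logic from control
# flow": the estimator and the controller are independent components
# that communicate only through the state estimate.

class TemperatureEstimator:
    """State determination: turn raw sensor readings into an estimate."""
    def __init__(self):
        self.estimate = None

    def update(self, readings):
        # A trivial estimator (the mean); a real system might use
        # filtering, but the architectural point is the separation.
        self.estimate = sum(readings) / len(readings)
        return self.estimate

class HeaterController:
    """Control: decide an action from the state estimate alone."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def action(self, estimate):
        return "heat_on" if estimate < self.setpoint else "heat_off"
```

Because the controller never sees raw sensor data, you can swap in a better estimator (or a simulated one for testing) without touching the control logic, which is precisely the decoupling the paper argues for.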



This paper is highly recommended. I am definitely using it in my architecture and design class and, since the writing is so accessible, I may even use it in the intro to software engineering class. If any of you get a chance to read it (its other nice attribute is that it is short, 6 pages), I would be interested in your comments. Later!



Wednesday, February 25, 2004

SCENE Conference

I gave a talk at the SCENE Conference in Philadelphia on Saturday, February 21st. SCENE stands for Student Chapter Event - Northeast, and it was an all day event where the ACM student chapters from universities in the Northeast met and attended presentations on a central theme (AI this year). This is the abstract:



Artificial Intelligence has been an active part of the computing industry for three decades. I will provide a personal view of AI in industry over that time, highlighting both the engineering and research aspects, compare it to AI at the university and offer advice for anyone making the transition from the university to industry. I also will speculate on the future of AI in industry.





You can download the pdf here. Rather than starting a new blog for AI I am adding it to this blog under the category artificial intelligence.



Tuesday, February 24, 2004

Coding Standards

In our last class I mentioned that it is useful to get developers to agree on coding standards, so they are not constantly arguing about indentation styles and naming conventions for variables, classes, methods, ... Lynn Rider's entry which follows shows that something that is well intentioned when taken to the extreme can result in a miserable experience! This is a great (and well told) example of how we can all learn from our experiences. Thanks Lynn!



In Monday night's class, you were talking about how coding standards should be defined 'well' in the beginning, before any coding is ever done. You made a claim that this can save you 3 weeks of time in the long run, as developers love to hash out what is better, more efficient, readable, etc...Although I can see the value in what you said, this brought to mind something that I remember from my internship 2 years ago.



I was working for a company for a few months to fulfill my internship credits for graduation. The company is very well known and takes on mostly defense contracts. When I first got there, my very first day, they handed me two documents to read. The first was their CMMI folder; they happened to be Level 5. The second they handed me was an Ada Coding Standards document that was given to them by the government as a standard way to write Ada code when working on defense contracts. I was impressed that they were at Level 5 and that they were so precise about how someone should write any little bit of code. I actually enjoyed reading the second book, as it not only told you how to do something, it told you why they wanted you to do it in that manner.



Down the line, however, I ended up doing regular code reviews with the other members of our team. When we did code reviews, everyone on the team was involved, as well as the Software Verification people. These usually were slotted for 2 hours at a time, and page by painful page, we would go through all code changes, unit tests, and anything else someone wanted to bring up. If we weren't finished by the end of the 2 hours, we would schedule another 2 hours the next day, and would do this until it was done. There was one particular man in our group who would sit there and pick apart every last thing. Sometimes he was correct, and sometimes I just think he wanted to be correct. Either way, there were many days, often more than once a day, when I fantasized about that famous Looney Tunes 'anvil' miraculously dropping out of the sky and falling on his head, just so we could move on. He was a stickler for these Ada Coding Standards, and although most of the time he was right, there were many times I would bring the book to the meetings just to stop the 'small war' between the developers. Was it the person? Was it the strict standards imposed by the government? Was it a little of both? And... was this good for us in the end?



For the most part, to me, keeping a standard in place seemed (and seems) like a good idea. But when someone made it his goal in life to argue for 15 minutes about the tense of wording in comments, or the capitalization of the first letter of an index on a for loop, or whether this name or that would be better suited for a variable... what I thought initially was a good thing turned out to be burdensome and torturous, to say the least. I don't know whether we saved time having coding standards in place in the long run; all I know is that it gave us more to disagree about, and unfortunately the 'anvil' never came to our rescue. :-)
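A lot of what Lynn's reviewer argued about (capitalization, identifier naming) is mechanically checkable, and automating it is one way to take those 15-minute arguments off the table. Here is a toy sketch of one such rule; a real team would use an off-the-shelf style checker, and this tiny checker is purely illustrative.

```python
# A toy automated style check: enforce one naming convention
# (lower_snake_case variable names) so the review never has to.

import re

# The rule, written once, argued about never (well, once).
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")

def check_names(names):
    """Return the names that violate the lower_snake_case rule."""
    return [n for n in names if not SNAKE_CASE.match(n)]

# Example: flag the two non-conforming names.
violations = check_names(["index", "MyVar", "loop_count", "X"])
```

The review meeting then argues only about rules worth having, since conformance to the agreed rules is a tool's output, not a matter of opinion.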



Saturday, February 7, 2004

New Kind of Science

For those of you that do not read Slashdot (and if you do not, it is highly recommended and can be accessed here), one of the items yesterday announced that Stephen Wolfram's book, A New Kind of Science, is accessible for free here. All it requires is a short registration. The book is fairly recent and huge (1192 pages). Wolfram's concepts have elicited strong reactions. As the title suggests, he does have a touch of ego. Notwithstanding, he is one of the intellectual mileposts of our generation and it is worth exploring his ideas. Wolfram also is the creator of Mathematica, a superb but pricey math system and programming environment. Check it out. As I trudge through it, I will have more to say about it and also about others' reactions to it. Later.



Thursday, February 5, 2004

Software for the Stars

Greg Horvath in his Comment on Requirements Innovation cited an excellent article. The article is on NASA's software engineering efforts and was in the January issue of Computer. It can be found here. The article discusses a wide ranging set of efforts including formal specifications, code generation from specifications, state-based control architecture and human engineering of code.



Starting next week I will review some of the techniques listed in the article and provide you with resources to explore them further. I think the NASA work is particularly interesting since it is one of the few current attempts to integrate the full range of software engineering techniques into production. Until then check out the article and let me know your thoughts on which techniques you would like to explore in more depth. Later.



Saturday, January 31, 2004

Evolving Architecture

One of my former students asked a great question on architecture and architecture reviews. The question, the answer, and some further elaboration by Joe Maranzano, an expert in architecture and architecture reviews follow these short notes.



Establishing a discipline of architecture reviews relatively early in the software development process is a great idea. The paper by Starr and Zimmerman, "A Blueprint for Success: Implementing an Architectural Review System" (www.stqemagazine.com, 2002; get it here), is a great introduction to architecture reviews. I will keep you posted on further sources of information on architecture reviews.



Enough background, here's the question, answer, and Maranzano's additional comments.



1. Does any sort of review only occur after the particular task reviewed is completely finished? That is, after the architecture review is done is it still possible to add to or change the architecture .. or should that be avoided?



Of course, if the review suggests changes or deficiencies, the architecture should be adjusted to accommodate them. Tuning can still be done until design; any changes after that have a ripple effect: the design has to be changed, and then the code, and ... . Bottom line: the general architecture should not change after the review, but knowledge gained during the design/development process may cause adjustments. And there are exceptions: if you are using very new technology, the architecture may change as you discover how it really works during development. We sometimes refer to that type of project as advanced development, and generally use our top staff on it. I think the key is that the architect (and there should only be one) should shepherd her architecture through development and testing. So the simple answer is: the architect should not change; the architecture may change, but this should be done carefully.



Joe added: "You might want to mention that if some portion of the architecture has to change (e.g., due to design discoveries or economic impact of the solution) that a review of that portion of the architecture can and should be done."



Hope you found this interesting and, as always, comments are welcome.



Later.



Saturday, January 24, 2004

The NeverEnding Story/Software

In the past ten years I have noticed a new class of software projects emerging. For lack of a better term I call them organic projects, since they take on a life of their own: they have no real endpoint, and releases, when they occur, are a matter of convenience, not a potential endpoint. Often these projects go through continuous release and evolve not only as the understanding of the requirements evolves, but also grow because they create new capabilities, that inspire a new burst of requirements, that create new capabilities, that inspire ... . Data collection and data mining projects fall into this category, as do other internal projects that support corporate operations. The web as a whole is an evolving meta-project of this class, where requirements evolve through use. Many Open Source projects follow a similar path, but often their evolution is inspired by a changing developer and user base. Indeed I see more projects falling into this category as software permeates more of our lives and more individuals have the capability to author and modify software. Another defining attribute of this software is that it normally lasts decades (if not longer, since software has only been around for decades!).



These projects are not well accommodated by current Software Engineering models, theory and practice. Agile methods come close, but they do not seem to consider long time scales. Incremental and spiral models, although spawning lots of releases, are working to an end: in incremental you prioritize the features, and in spiral you do that plus risk assessment. Similarly design, development, testing, maintenance, (name your own) are affected.



So what do you think? Is this a new class of software and do you have other (better) examples of this? Is it worth beginning a thread on this blog that discusses this class of software and thoughts on the software engineering of these evolving systems?



Later.



Wednesday, January 14, 2004

Requirements Innovation

Has innovation in requirements kept pace with innovation in software process, design, analysis or even testing? I do not think so. Sure, there is some innovation in requirements elicitation, especially with the use of prototyping, the Delphi method, ... but I see very little innovation in requirements presentation and use. In this age of hypertext and graphics I would expect more in presentation. Similarly in use, about the only thing I can point to is Parnas' active reviews (in D.M. Hoffman and D.M. Weiss, Software Fundamentals: Collected Papers by David L. Parnas, Addison-Wesley, 2001, ISBN: 0-201-70369-6, Chapter 17 -- if you do not have this book and you are or aspire to be a software engineer, you should put it on your "books to read" list, if not your "books to purchase when I have enough money" list).



Granted that life cycle tools have integrated requirements into their workbenches but there is no scheme for the rest of us.



I hope to make this one of the tracks we explore on the blog. We spend so much time grappling with and relying on requirements, yet we do not seem to spend enough time trying to improve them and their use.



So, for discussion, am I wrong? Is there innovation in requirements presentation and use, and if so where? If there is not innovation in requirements presentation and use, where should we begin? Perhaps one place to begin is to suggest publicly available instances of what you consider great requirements or attempts to be innovative in representing and using requirements. I will try to do the same. Later.



Thursday, January 8, 2004

Proto Spasms

In our CS540 class Monday we discussed prototyping. One recurring issue is how members of the team can prevent promotion of the prototype to a product. Recall that most prototype schemes are designed to support requirements elicitation and after the prototype is completed the appropriate course is to formalize requirements and then begin design. Unfortunately prototypes often take on a life of their own and sometimes the developers contribute to it.



So the question is how we can ensure that prototyping is not misused. My answer in class was a strategic one involving architects and architectural reviews. However, in organizations where reviews are not in place (and may never be in place), what can teams do to prevent misuse?



For the contrarians out there, do you know of instances where promoting a prototype (excluding evolutionary prototypes) to a product was a good thing? I appreciate your input! Later.



Saturday, January 3, 2004

Redesign

Return visitors will notice that I have redesigned the site. This includes making it more consistent with Stevens colors. I would appreciate your comments on the site.



My New Year's resolutions include posting at least once per week on this site. I know there was a bit of inactivity during the holidays but I hope to increase the activity in the coming months.



If you find value in this site, tell your friends AND contribute to the site. I will be happy to post new items if you email them to me and of course give you credit for the post. Of course you can always comment on the posts.



Advancing Software Engineering will always be a community effort of all the constituencies: practitioners, researchers and stakeholders. Later.



The Unexplored Universe

This is a request for feedback from students who have taken my Introduction to Quantitative Software Engineering Course, CS540 for those of you who prefer numbers. However I encourage others to contribute, since I will provide the current list of topics I introduce in the course.



Currently the course covers: Software Process Models, Software Requirements, Capability Maturity Model and the Software Engineering Institute, Managing Software Requirements, Software Project Planning, Metrics, Estimating Effort, Risk Analysis, Scheduling, Software Project Reviews, MULTICS case study, Quality Assurance, Development Standards, Configuration and Build Management, Testing, Systems Engineering, Analysis and Modeling, Architecture, Design, Supporting Techniques (Problem Solving, Meeting Methods, Negotiation, Management), Object-Oriented Analysis, OO Design, OO Testing, OO Metrics, Light vs. Heavy Methodologies Including XP, Open Source, Gaming, Software Factories, Computer Human Interaction, Formal Methods, Clean Room Development, Fault Tolerance, Software Archeology. Phew. The textbooks include Brooks' Mythical Man Month and van Vliet's Software Engineering.



My goal is to touch on each of these subjects and provide pointers so that students can investigate the topics further on their own. So I am reluctant to exclude topics and would appreciate pointers to topics in Software Engineering that I have missed. One bit of feedback last semester was that I should spend more time on Quality.



Are there other topics that I am missing? Are there some topics that I should not cover? Do you have ideas for other classic case studies? One topic I am considering adding next term is Out Sourcing of Software -- would this be helpful?



Thanks for your help! Later.