
WHAT DO YOU MEAN YOU CAN'T TELL ME IF MY PROJECT IS IN TROUBLE?

www.dacs.dtic.mil/awareness/newsletters/technews2-2/fesma.pdf
by
Joseph Kasser, DSc., CM., CEng.
University of Maryland University College
University Boulevard at Adelphi Rd.
College Park, MD 20742-1614
Phone 301-985-4616, fax 301-985-4611
E-mail: jkasser@polaris.umuc.edu

Victoria R. Williams
Keane Federal Systems, Inc.
1375 Piccard Drive, Suite 200
Rockville, MD 20850
Phone 301-548-4450, fax 301-548-1047
E-mail: vwilliam@keane.com, vrwcbw@erols.com

ABSTRACT

None of the measurements made during the Software Development Life Cycle (SDLC) provides an accurate answer to the question posed in the title of this paper. Anecdotal evidence suggests that most projects do not fail because technical risks were not mitigated; rather, they fail as a result of poor management of the human element. This paper describes the development of a set of risk-indicators based on the human element. These risk-indicators can be further refined into metrics that provide an answer to the question.

INTRODUCTION

The SDLC for large systems can take several years to complete. During this time, the:

The growing international dependency on the ISO standards for the SDLC indicates that this phenomenon of software project failure is not limited to the United States. Anecdotal evidence suggests that most projects do not fail for technical reasons; rather, the failure tends to be due to the human element. In addition, while the Standish Group identified ten major causes of project failure along with their solutions, it also stated that it was unclear whether those solutions could be implemented (Voyages, 1996). This paper describes the development of several indicators that can be used to identify metrics for predicting that a project is at risk of failure.

A METHODOLOGY FOR DEVELOPING METRICS FOR PREDICTING RISKS OF PROJECT FAILURES

The methodology is based on case studies written by students in the Graduate School of Management and Technology at the University of Maryland University College (Note 1). These students wrote and presented term papers describing their experiences in projects that were in trouble. The papers adhered to the following instructions:


The methodology:
Summary of Student Papers

Nineteen students produced papers that identified 34 different indicators. Each indicator identified was a risk or a symptom of a risk that can lead to project failure. Several indicators showed up in more than one student paper; "poor requirements" showed up in all of the papers.

The Survey

A survey questionnaire was constructed based on the student-provided risk-indicators (Note 2) and sent to systems and software development personnel via the Internet. The survey asked respondents to state whether they agreed or disagreed that the student-provided indicators were causes of project failure (Note 3). One hundred and forty-eight responses were received.

The findings are summarized in Table 1. The first column contains a number identifying the risk-indicator described in the second column. The third column lists the number of students who identified the risk. The fourth column contains the percentage of agreement. The fifth column contains the percentage of disagreement. The sixth column is the ranking of the risk-indicator.
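
The columns of Table 1 can be derived mechanically from the raw survey responses. The following sketch (in Python, using hypothetical field names and placeholder data rather than the actual survey records) shows one way to tally the agreement and disagreement percentages for each risk-indicator and rank the indicators by agreement:

from collections import Counter

# Hypothetical raw data: one dict per respondent, mapping a risk-indicator
# number (1..34) to "agree" or "disagree"; unanswered indicators are omitted.
responses = [
    {5: "agree", 8: "agree", 29: "disagree"},
    {5: "agree", 8: "disagree", 29: "agree"},
    # ... one entry per respondent (148 were received) ...
]

def tally(responses, indicator_ids):
    """Return (indicator, % agreement, % disagreement) over all respondents."""
    n = len(responses)
    rows = []
    for ind in indicator_ids:
        votes = Counter(r.get(ind) for r in responses)
        rows.append((ind, 100.0 * votes["agree"] / n, 100.0 * votes["disagree"] / n))
    return rows

# Rank by percentage of agreement, highest first (the sixth column of Table 1).
ranked = sorted(tally(responses, range(1, 35)), key=lambda row: -row[1])
for rank, (ind, agree, disagree) in enumerate(ranked, start=1):
    print(f"indicator {ind:2d}: agree {agree:5.1f}%  disagree {disagree:5.1f}%  rank {rank}")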

Survey Results

The survey results were surprising. Modern Total Quality Management (TQM) theory holds that the Quality Assurance department alone is not responsible for the quality of the software; everybody shares that responsibility. Thus, while it was expected that most respondents would disagree with this risk-indicator, only 60% of the respondents disagreed. It was also anticipated that most respondents would agree with the other risk-indicators, yet the overall degree of agreement was:
0.7% (one respondent) agreed with all 34 risk-indicators.
8.1% agreed with at least 30 risk-indicators.
51% agreed with at least 20 risk-indicators.
93% agreed with at least 10 risk-indicators.
As for the degree of disagreement:
0.7% (one respondent) disagreed with 25 risk-indicators.
4.7% disagreed with at least 20 risk-indicators.
52% disagreed with at least 10 risk-indicators.
88% disagreed with at least one risk-indicator.

Note 1: The papers were written for a class on IV&V, hence the emphasis on IV&V. However, if the descriptions of tasks that IV&V should have performed (in the papers) are examined, the word "IV&V" could easily be replaced with the word "systems engineering" and the papers would be equally valid.
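
The cumulative agreement and disagreement figures above fall out of simple per-respondent counts. A minimal sketch, reusing the hypothetical responses list from the tallying example earlier:

def percent_with_at_least(responses, verdict, threshold):
    """Percentage of respondents who gave `verdict` ("agree" or "disagree")
    to at least `threshold` of the 34 risk-indicators."""
    hits = sum(1 for r in responses
               if sum(1 for v in r.values() if v == verdict) >= threshold)
    return 100.0 * hits / len(responses)

for k in (10, 20, 30, 34):
    print(f"agreed with at least {k} risk-indicators: {percent_with_at_least(responses, 'agree', k):.1f}%")
for k in (1, 10, 20, 25):
    print(f"disagreed with at least {k} risk-indicators: {percent_with_at_least(responses, 'disagree', k):.1f}%")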

Further Analysis

The top seven (high priority) risk-indicators were identified using the following approaches:

These results show a high degree of consensus on these risk-indicators as causes of project failures.

Sensitivity Analysis

The sample size for respondents without management experience was 99. The raw tallies for the risk-indicators listed below were examined to see whether there was a difference between non-managers and managers with various years of experience. No differences of more than 10% were noted (a comparison sketch follows the list).
5 Lack of, or poor, plans
8 Low morale
15 Failure to collect performance & process metrics and report them to management
25 Lack of management support
27 Lack of understanding that demo software is only good for demos
29 Political considerations outweigh technical factors
32 There are too many people working on the project
33 Unrealistic deadlines - hence schedule slips
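
The comparison referred to above reduces to computing the agreement rate for each indicator separately within each experience group and flagging any indicator where the rates differ by more than ten percentage points. A sketch, again with hypothetical field names (is_manager, answers):

def agreement_rate(group, indicator):
    """Fraction of a group of respondents agreeing that `indicator` causes project failure."""
    if not group:
        return 0.0
    return sum(1 for r in group if r["answers"].get(indicator) == "agree") / len(group)

def sensitivity(respondents, indicators, gap=0.10):
    """Return the indicators whose agreement rates differ by more than `gap`
    between managers and non-managers (no difference exceeded 10% in the survey)."""
    managers = [r for r in respondents if r["is_manager"]]
    non_managers = [r for r in respondents if not r["is_manager"]]
    flagged = []
    for ind in indicators:
        diff = abs(agreement_rate(managers, ind) - agreement_rate(non_managers, ind))
        if diff > gap:
            flagged.append((ind, diff))
    return flagged

# Hypothetical usage over the eight indicators listed above:
# print(sensitivity(respondents, [5, 8, 15, 25, 27, 29, 32, 33]))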

The "Other" Category

Several respondents added a small number of risk-indicators in the "other" category of the questionnaire.

These included:

Thus, the small student sample size of 19 seems to have identified most of the important risk-indicators.

The Risk-Indicators Most People Disagreed With

Part of the analysis of the survey results was to determine which risk-indicators received the greatest amount of disagreement as causes of project failure. This was done by determining the:

The risk-indicators receiving the largest number of disagreements were:

The following risk-indicators received the least number of agreements as causes of project failure:

In each method of analysis, six risk-indicators showed up in the group receiving the greatest amount of disagreement. Consider the risk-indicators most of the respondents disagreed with, namely:

THE CHAOS STUDY

The (Chaos, 1995) study served as a reference; it had identified the major reasons for project failure. The five risk-indicators chosen in this study as the most important causes of project failure also appear on the Chaos list of major reasons for project failure. The correlation between this study and the Chaos study is shown below. While "resources are not allocated well" did not show up in the top-seven lists of this study, it was fourth in the tally. Thus, this study supports the findings of the Chaos study.

PRESENCE OF RISK-INDICATORS IN ISO 9001 AND THE SOFTWARE-CMM

The elements of Section 4 of the ISO 9001 Standard and the five levels of the Software-CMM (CMM, 1995) were examined and interpreted to determine whether the major student risk-indicators were covered in the ISO Standard and in the Software-CMM. The ISO 9001 Standard defines the minimum requirements for a quality system, while the Software-CMM addresses the issues of continuous process improvement more explicitly than the ISO 9001 Standard does. The findings are shown below, where an 'x' represents the presence of the indicator. The same two major risk-indicators could not be mapped into either the elements of Section 4 of the ISO Standard or the Software-CMM, namely:

Thus, conformance to either or both Quality standards does not ensure mitigating these risks.
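
The mapping exercise described above reduces to a coverage check: each major risk-indicator either maps onto at least one element of ISO 9001 Section 4 or one Software-CMM key process area, or it maps onto neither. A minimal sketch of that bookkeeping, with purely illustrative entries rather than the study's actual mapping:

# Illustrative entries only; True means the indicator maps onto the standard.
coverage = {
    "Poor requirements (example)":    {"ISO 9001 Sec. 4": True,  "SW-CMM": True},
    "Unmapped indicator A (example)": {"ISO 9001 Sec. 4": False, "SW-CMM": False},
    "Unmapped indicator B (example)": {"ISO 9001 Sec. 4": False, "SW-CMM": False},
}

for indicator, mapping in coverage.items():
    marks = "  ".join(f"{std}: {'x' if present else '-'}" for std, present in mapping.items())
    note = "   <- covered by neither standard" if not any(mapping.values()) else ""
    print(f"{indicator:34s}{marks}{note}")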

THE DEVELOPMENT OF METRICS TO IDENTIFY THE PRESENCE OF THESE INDICATORS

The large consensus on the major reasons for project failure suggests that if these reasons were removed, projects would have a greater probability of success. The (Chaos, 1995) study showed that projects tended to succeed when the opposite of these risk-indicators was present (e.g., good requirements instead of poor requirements). The (Voyages, 1996) paper stated that the causes were known, but that it was unclear whether the solutions could be implemented. It thus seems that the current metrics paradigm is focused on measuring the wrong things; it needs to change so that metrics show the presence or absence of the major risk-indicators identified above, and of any others known to be major causes of project failure (risk management). Consider ways in which metrics can be developed for the following risk-indicators:
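
As one hedged illustration of what such metrics could look like (the measures and field names below are assumptions, not the study's proposals), requirements volatility can serve as a proxy for the 'poor requirements' indicator, and cumulative slip against the baseline schedule as a proxy for 'unrealistic deadlines':

from datetime import date

def requirements_volatility(change_log, baseline_count, period_days=30):
    """Requirements added, modified, or deleted in the last `period_days`,
    as a fraction of the baseline requirement count; a rising trend is one
    possible signal of the 'poor requirements' risk-indicator."""
    cutoff = date.today().toordinal() - period_days
    recent = sum(1 for change in change_log if change["date"].toordinal() >= cutoff)
    return recent / baseline_count if baseline_count else 0.0

def schedule_slip(baseline_start, baseline_end, forecast_end):
    """Slip as a fraction of the originally planned duration; steady growth is
    one possible signal of the 'unrealistic deadlines' risk-indicator."""
    planned_days = (baseline_end - baseline_start).days
    slip_days = (forecast_end - baseline_end).days
    return slip_days / planned_days if planned_days else 0.0

# Hypothetical usage:
# volatility = requirements_volatility(change_log, baseline_count=420)
# slip = schedule_slip(date(1998, 1, 5), date(1999, 6, 30), date(1999, 10, 15))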

However, just changing the metrics paradigm may not be the complete solution. Cobb's Paradox (Voyages, 1996) states "We know why projects fail, we know how to prevent their failure - so why do they still fail?" Now a paradox is a symptom of a flaw in the underlying paradigm. Perhaps Juran and Deming provided the remedy. Juran, as quoted by (Harrington, 1995, 198), stated that management causes 80 to 85% of all organizational problems. (Deming, 1993, 35) stated that 94% of the problems belong to the system (i.e., were the responsibility of management). In this survey, both managers and non-managers tended to disagree with the two management risk-indicators (#9 and #10). Several respondents with many years of systems or software engineering experience did not even recognize the term "SDLC." It is difficult to understand how Information Technology managers can make informed decisions to mitigate technical risks if they don't understand the implications of their decisions. The resolution of Cobb's paradox may be to develop a new paradigm for an information-age organization that performs the functions of management without managers.

DEFICIENCIES IN THE STUDY

The following deficiencies are present in the study:

CONCLUSIONS AND RECOMMENDATIONS

Except for poor requirements, none of the risk-indicators identified by this study are technical. Thus, the findings support:

Areas For Further Study

AUTHORS

Dr. Kasser has more than 25 years of award-winning experience in management and engineering. He teaches software IV&V and software maintenance at the University of Maryland University College. He is a recipient of NASA's Manned Space Flight Awareness (Silver Snoopy) Award for quality and technical excellence. He is a Certified Manager and a recipient of the Institute of Certified Professional Managers' 1993 Distinguished Service Award. He is the author of Applying Total Quality Management to Systems Engineering, published by Artech House, and of more than 30 journal articles and conference papers. His current interests lie in the areas of applying systems engineering to organizations and using technology to improve the practice of management.

Victoria R. Williams is a Senior Consultant at Keane Federal Systems, Inc. in Rockville, MD. She has 18 years of award-winning experience in various aspects of systems engineering. She has received many awards and commendations from employers and customers, including Computer Data Systems Inc., the Department of the Navy, Naval Air Systems Command, and the Department of the Army. She has written software in HTML, PowerBuilder, Java, FoxPro and Visual Basic. She has performed object-oriented analysis and design in an effort to reengineer an operational legacy system. Her previous experience includes project management support for the Department of the Navy, and systems and software engineering for the U.S. Navy, the Royal Australian Navy, and the U.S. Military Sealift Command. She is also trained at CMM Level 2 and is currently working on her Master of Science degree in Computer Systems Management in the Graduate School of Management and Technology at the University of Maryland University College in College Park, Maryland.