
August 2010
Master of Computer Application (MCA) – Semester 5
MC0084 – Software Project Management &
Quality Assurance – 4 Credits
(Book ID: B0958 & B0959)
Assignment Set – 1 (60 Marks)

Answer all questions. Each question carries FIFTEEN marks.

Book ID: B0958

1. Explain the following theoretical concepts in the context of Project Management:
A) Factors influencing project management
Project management is often summarized in a triangle. The three most important factors are time, cost, and scope. These form the vertices, with quality as the central theme:


1. Projects must be delivered on time.
2. Projects must be within cost.

3. Projects must be within scope.

4. Projects must meet customer quality requirements.

More recently, this has given way to a project management diamond, with time, cost, scope, and quality as the four vertices and customer expectations as the central theme. Since no two customers' expectations are the same, you must ask explicitly what their expectations are.

Project Manager
The project manager is responsible overall for the successful planning and execution of a project. This title is used in the construction industry, architecture, information technology, and many other occupations based on the production of a product or service. The project manager has many tasks:
• Planning
• Staffing (acquiring human resource)
• Execution (putting the plan into action)
• Monitoring the progress of the project

Beyond the above tasks, project managers in an organization are responsible for more than managing individual projects; their responsibility spans the organization's overall life cycle. Managers continuously monitor and assess the capabilities of the organization, and for this they follow the CMM (Capability Maturity Model). CMM can also be used to assess software organizations as part of a software acquisition policy and to qualify contractors by requiring them to be certified at particular CMM maturity levels.

The CMM (Capability Maturity Model) for software is a framework that was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University by observing the best practices in software and other organizations. CMM reflects the collective process experience and expectations of many companies. One objective of the CMM is to distinguish mature processes from immature, or ad hoc, processes. Immature software processes imply that projects are executed without many guidelines, and the outcome of a project depends largely on the capability of the team and the project leader. On the other hand, with mature processes, a project is executed by following defined processes. In this case, the outcome of the project is less dependent on people and more on the processes. It follows, then, that the more mature the processes, the more predictable the results and the more well controlled the projects. Hence, the CMM framework describes the key elements of software processes at different levels of maturity. Consequently, it also specifies the path that a software process follows while moving from immature processes to highly mature processes.


Project Management activities:
Project Management is composed of several different types of activities such as:

1. Planning the work or objectives: A manager must decide what objectives are to be achieved, what resources are required to achieve the objectives, how and when the resources are to be acquired, and how the objectives are to be achieved.

2. Assessing and controlling risk (or risk management): Risk is associated with several issues; it can be technical, methodological, or financial. The manager needs to plan from the start of the project for handling unexpected or sudden risks.

3. Estimating resources: Resource estimation is another crucial task of the project manager. A resource can be software, hardware, human personnel, capital, etc. Resource estimation involves planning the resources required for the given tasks in the given period of time. Optimum utilization of these resources is the ultimate goal of the manager.

4. Allocation of resources and assigning tasks: This involves identification of tasks and allocation of the required resources to fulfill each task, for example, identifying skilled personnel to solve a given task.

5. Organizing the work: Organizing involves clear lines of authority and responsibility for groups of activities that achieve the goals of the enterprise.

6. Acquiring human resources (staffing): Staffing deals with hiring personnel, which involves recruiting, compensating, developing and promoting employees.

7. Directing activities: Directing involves leading subordinates. The goal of directing is to guide subordinates so that they understand and identify with the organizational structure and the goals of the enterprise.

8. Controlling project execution: Controlling consists of measuring and correcting activities to ensure that the goals are achieved. Controlling requires measurement against plans and taking corrective action when deviation occurs.
9. Tracking and reporting progress: After assigning the tasks to the team members, it is essential to track and monitor the work progress. The work progress is documented at regular intervals.

10. Forecasting future trends in the project: The project must be designed to facilitate the extension of new features in the forthcoming days. This is a very crucial task of the manager or designer, who has to keep this point in mind while designing the architecture of the system.

11. Quality management: Quality means satisfying the customer's requirements. Quality is reflected in many ways: through functionality, performance, and external factors such as portability. The project manager therefore needs to apply quality management techniques from the analysis phase itself.

12. Issue resolution: An issue can be a conflict among team members, a sudden increase in the attrition rate of employees, a sudden drop in the rupee's value, etc. Based on the issue, proper corrective action needs to be taken to ensure the smooth working of the system.

13. Defect prevention: A defect is a flaw in the system and is more serious than an error. A defect occurs because of improper design, poor quality, etc. Thorough testing is needed before and after implementation of the product to avoid defects.

14. Project closure meeting: Project closure describes the overall project details, which can be conveyed through closure reports, e.g., performance reports, testing reports, and project completion reports.


Stakeholders
Stakeholders are all those groups, units, individuals, or organizations, internal or external to our organization, which are impacted by, or can impact, the outcomes of the project. This includes the project team, sponsors, steering committee, customers, and customer co-workers who will be affected by the change in customer work practices due to the new product or service.



B) Project Communication:


There are many reasons that software projects get into trouble. The scale of many development efforts is large, leading to complexity, confusion, and significant difficulties in coordinating team members. Uncertainty is common, resulting in a continuing stream of changes that ratchets the project team. Interoperability has become a key characteristic of many systems. New software must communicate with existing software and conform to predefined constraints imposed by the system or product.
To deal with these issues effectively, a software engineering team must establish effective methods for coordinating the people who do the work. To accomplish this, mechanisms for formal and informal communication among team members and between multiple teams must be established. Formal communication is accomplished through "writing", structured meetings, and other relatively non-interactive and impersonal communication channels. Informal communication is more personal. Members of a software team share ideas on an ad hoc basis, ask for help as problems arise, and interact with one another on a daily basis.

Formal, impersonal approaches include software engineering documents and deliverables (including source code), technical memos, project milestones, schedules, and project control tools, change requests and related documentation, error tracking reports, and repository data.

Formal, interpersonal procedures focus on quality assurance activities applied to software engineering work products. These include status review meetings and design and code inspections.

Informal, interpersonal procedures include group meetings for information dissemination and problem solving and collocation of requirements and development staff.

Electronic communication encompasses electronic mail, electronic bulletin boards, and, by extension, video-based conferencing systems.

Interpersonal networking includes informal discussions with team members and those outside the project who may have experience or insight that can assist team members.

(Figure omitted: the parties involved during project work.)



C) Statement of Work (SOW):


Large and complex systems require detailed work requirements to be written, stating "what is to be done" in definitive and precise language and terminology. The purpose of a SOW is to detail the work requirements for projects and programs that have deliverables and/or services performed. There are five types of SOW (one for each phase of the acquisition life cycle) during the system life cycle, as identified by the Systems Engineering Management Plan (SEMP). The SOW covers the work requirements and is used, in conjunction with the applicable performance/design requirements contained in specifications, for contractual agreements. Any proposed supplier can submit a proposal based on his perception of the needs as defined by the SOW, thus enabling a fair price for the goods and/or services to be provided.

2. Explain the Software Cost estimation methods and the COCOMO model.


Software project management begins with a set of activities that are collectively called project planning. Before the project can begin, the manager and the software team must estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish. Whenever estimates are made, we look into the future and accept some degree of uncertainty as a matter of course. To quote Frederick Brooks [BRO75], although estimating is as much art as it is science, this important activity need not be conducted in a haphazard manner. Useful techniques for time and effort estimation do exist. Process and project metrics can provide historical perspective and powerful input for the generation of quantitative estimates. Past experience (of all people involved) can aid immeasurably as estimates are developed and reviewed. Because estimation lays a foundation for all other project planning activities and project planning provides the road map for successful software engineering, we would be ill-advised to embark without it.
Steps for Estimation
Step 1: Establish Objectives
Key the estimating objectives to the needs for decision making information.

Balance the estimating accuracy objectives for the various system components of the cost estimates.

Re-examine estimating objectives as the process proceeds, and modify them where appropriate.

Step 2: Plan for Required Data and Resources
If we consider the software cost-estimation activity as a mini project, then we automatically cover this problem by generating a project plan at an early stage. The mini plan includes an early set of notes on the why, what, when, who, where, how, and how much of your estimating activity.

Step 3: Pin Down Software Requirements
It is important to have a set of software specifications that are as unambiguous as possible (subject to qualifications with respect to our estimating objectives). A specification is testable to the extent that one can define a clear pass/fail test for determining whether or not the developed software will satisfy it. In order to be testable, specifications must be specific, unambiguous, and quantitative wherever possible.
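For instance, a requirement such as "the search function shall return results within 200 ms for catalogs of up to 10,000 items" is testable because it is quantitative. A minimal pass/fail test sketch in Python (the catalog size, the 200 ms threshold, and the in-line stand-in for the real search function are all hypothetical):

import time

def test_search_meets_response_time_spec():
    # Hypothetical quantitative spec: search must answer within 0.2 s
    # for a catalog of 10,000 items.
    catalog = [f"item-{n}" for n in range(10_000)]
    start = time.perf_counter()
    results = [item for item in catalog if "42" in item]  # stand-in for the real search
    elapsed = time.perf_counter() - start
    assert elapsed < 0.2, f"spec violated: search took {elapsed:.3f} s"
    assert results, "search returned no results"

Run under pytest (or called directly), the assertion gives an unambiguous pass/fail verdict for the specification.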

Step 4: Work out as Much Detail as Feasible
"As feasible" here means "as is consistent with our cost-estimating objectives". In general, the more detail to which we carry out our estimating activities, the more accurate our estimates will be, for three main reasons: a) the more detail we explore, the better we understand the technical aspects of the software to be developed; b) the more pieces of software we estimate, the more we get the law of large numbers working for us to reduce the variance of the estimate; c) the more we think through all the functions the software must perform, the less likely we are to miss the costs of some of the more unobtrusive components of the software.

Step 5: Use Several Independent Techniques and Sources
None of the alternative techniques for software cost estimation is better than the others from all aspects; their strengths and weaknesses are complementary. It is important to use a combination of techniques in order to avoid the weaknesses of any single method and to capitalize on their joint strengths.

Step 6: Compare and Iterate Estimates
The most valuable aspect of using several independent cost-estimation techniques is the opportunity to investigate why they give different estimates. An iteration of the estimates after finding why they give different estimates may converge to a more realistic estimate.

Step 7: Follow-up
Once a software project is started, it is essential to gather data on its actual costs and progress and compare these to the estimates, because software estimating inputs are imperfect (sizing estimates, cost driver ratings). It is important to update the cost estimate with new knowledge of the cost drivers by comparing estimates to actuals, providing a more realistic basis for projects that do not exactly fit the estimating model. Both near-term project-management feedback and long-term model-improvement feedback on any estimates-versus-actuals differences are important.

Software projects tend to be volatile: components are added, split up, rescoped, or combined in unforeseeable ways as the project progresses. The project manager needs to identify these changes and generate a more realistic update of the estimated upcoming costs. Software is an evolving field; estimating techniques are all calibrated on previous projects, which may not have featured the future project's environment. It is important to sense differences due to these trends and incorporate them into improved project estimates and improved estimating techniques while continuing to manage the project. Software estimating techniques are imperfect; for long-range improvements, we need to compare estimates to actuals and use the results to improve the estimating techniques.

Software Cost Estimation Methods
A number of methods have been used to estimate software costs.

Algorithmic Models
These methods provide one or more algorithms which produce a software cost estimate as a function of a number of variables which relate to some software metric (usually its size) and cost drivers.
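Since the question names COCOMO, here is a minimal sketch of Boehm's basic COCOMO equations as one instance of an algorithmic model: effort E = a(KLOC)^b person-months and duration D = cE^d months, with Boehm's published coefficients for the three project modes (the 32-KLOC example project is hypothetical):

# Basic COCOMO (Boehm, 1981): effort E = a * KLOC**b (person-months),
# development time D = c * E**d (months).
COEFFICIENTS = {
    # mode: (a, b, c, d)
    "organic": (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded": (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    # Returns (effort in person-months, duration in months).
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

effort, duration = basic_cocomo(32, "organic")
print(f"effort = {effort:.1f} PM, duration = {duration:.1f} months")
# -> roughly 91 PM and 14 months for a 32-KLOC organic-mode project

Intermediate COCOMO multiplies this nominal effort by fifteen cost-driver ratings; the basic model above uses size alone.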

Expert Judgement
This method involves consulting one or more experts, perhaps with the aid of an expert-consensus mechanism such as the Delphi technique.

Analogy Estimation
This method involves reasoning by analogy with one or more completed projects to relate their actual costs to an estimate of the cost of a similar new project.

Top-Down Estimation
An overall cost estimate for the project is derived from global properties of the software product. The total cost is then split up among the various components.

Bottom-Up Estimation
Each component of the software job is separately estimated, and the results aggregated to produce an estimate for the overall job.

Parkinson's Principle
Parkinson's principle ("Work expands to fill the available volume") is invoked to equate the cost estimate to the available resources.

Price to Win
The cost estimation developed by this method is equated to the price believed necessary to win the job. The estimated effort depends on the customer's budget and not on the software functionality.







3. Describe various White Box testing techniques.
Definition:
"White box testing is a test case design method that uses the control structure of the procedural design to derive test cases".

The white box testing technique does the following:
1. Guarantees that all independent paths within a module have been exercised at least once.
2. Exercises all logical decisions on their true and false sides.
3. Executes all loops at their boundaries and within their operational bounds.
4. Exercises internal data structures to ensure their validity.
Basis Path Testing
Basis path testing is a white-box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graph Notation
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node.

(Figure omitted: notation for representing control flow: sequence, if, while, until, and case constructs.)

On a flow graph:

• Arrows called edges represent flow of control

• Circles called nodes represent one or more actions.

• Areas bounded by edges and nodes are called regions.

• A predicate node is a node containing a condition

Any procedural design can be translated into a flow graph. Note that compound Boolean expressions in tests generate at least two predicate nodes and additional arcs.

Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

(Figure A omitted: a flowchart notation.)
(Figure B omitted: the corresponding flow graph notation.)
(Figure C omitted: a predicate node.)

When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined. For example, a set of independent paths for the flow graph illustrated in Figure (B) is:

Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11

Note that each new path introduces a new edge. The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges. Paths 1, 2, 3, and 4 constitute a basis set for the flow graph in Figure (B). That is, if tests can be designed to force execution of these paths (a basis set), every statement in the program will be guaranteed to be executed at least once and every condition will have been executed on its true and false sides. It should be noted that the basis set is not unique; in fact, a number of different basis sets can be derived for a given procedural design. How do we know how many paths to look for? The computation of cyclomatic complexity provides the answer. Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.

2. Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E – N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.

3. Cyclomatic complexity, V(G), for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

Referring once more to the flow graph in Figure (B), the cyclomatic complexity can be computed using each of the algorithms just noted:

1. The flow graph has four regions.
2. V(G) = 11 edges – 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.

Therefore, the cyclomatic complexity of the flow graph in Figure (B) is 4.
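These counts can also be checked mechanically. A minimal Python sketch that computes V(G) = E – N + 2 from an edge list (the small graph encoded below is hypothetical: a while loop containing an if-else, not Figure (B)):

# Cyclomatic complexity from a control-flow graph given as an edge list.
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2   # V(G) = E - N + 2

# Hypothetical graph: node 1 is a loop test, node 2 an if-else test.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1), (1, 6)]
print(cyclomatic_complexity(edges))      # 7 edges - 6 nodes + 2 = 3

# Cross-check with V(G) = P + 1: nodes 1 and 2 are predicates, so 2 + 1 = 3.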

Deriving Test Cases
The basis path testing method can be applied to a procedural design or to source code. In this section, we present basis path testing as a series of steps. The procedure average, depicted in PDL, will be used as an example (Example 1) to illustrate each step in the test case design method. Note that average, although an extremely simple algorithm, contains compound conditions and loops. The following steps can be applied to derive the basis set:
Definition:

A test case is a series of tests used to determine whether one particular thing works properly. Often, that means trying the same operation over and over again with small variations in the procedure or the data.

Steps for deriving the test cases
1. Using the design or code as a foundation, draw a corresponding flow graph: Referring to the PDL for the average program, the flow graph is created by numbering those PDL statements that will be mapped into corresponding flow graph nodes. The corresponding flow graph is in Figure 2.3.

2. Determine the cyclomatic complexity of the resultant flow graph: The cyclomatic complexity for Figure 2.4 is calculated as follows:
V(G) = 6 regions
V(G) = 17 edges – 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6


3. Determine a basis set of linearly independent paths: The value of V(G) provides the number of linearly independent paths through the program control structure. In the case of procedure average, we expect to specify six paths:

Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2- . . .
Path 5: 1-2-3-4-5-6-8-9-2- . . .
Path 6: 1-2-3-4-5-6-7-8-9-2- . . .

The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path through the remainder of the control structure is acceptable. It is often worthwhile to identify predicate nodes as an aid in the derivation of test cases. In this case, nodes 2, 3, 5, 6, and 10 are predicate nodes.

4. Prepare test cases that will force execution of each path in the basis set: Data should be chosen so that conditions at the predicate nodes are appropriately set as each path is tested. Test cases that satisfy the basis set are given below.

Example 1: Sample program code (PDL):

PROCEDURE average;
* This procedure computes the average of 100 or fewer numbers that lie
* between bounding values; it also computes the sum and the total number valid.
INTERFACE RETURNS average, total.input, total.valid;
INTERFACE ACCEPTS value, minimum, maximum;
TYPE value[1:100] IS SCALAR ARRAY;
TYPE average, total.input, total.valid, minimum, maximum, sum IS SCALAR;
TYPE i IS INTEGER;
i = 1;
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100
    increment total.input by 1;
    IF value[i] >= minimum AND value[i] <= maximum
        THEN increment total.valid by 1;
             sum = sum + value[i];
        ELSE skip
    ENDIF
    increment i by 1;
ENDDO
IF total.valid > 0
    THEN average = sum / total.valid;
    ELSE average = -999;
ENDIF
END average
(Figure omitted: flow graph for the average procedure.)
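For readers who want to execute the example, here is a direct Python rendering of the PDL above (a sketch only; the -999 sentinel and the 100-value cap follow the PDL):

SENTINEL = -999

def average(values, minimum, maximum):
    # Mirrors the PDL procedure: returns (average, total_input, total_valid).
    # Processing stops at the -999 sentinel or after 100 inputs.
    total_input = total_valid = 0
    total_sum = 0
    i = 0
    while i < len(values) and values[i] != SENTINEL and total_input < 100:
        total_input += 1
        if minimum <= values[i] <= maximum:
            total_valid += 1
            total_sum += values[i]
        i += 1
    avg = total_sum / total_valid if total_valid > 0 else SENTINEL
    return avg, total_input, total_valid

# Path 2 test case from the text: the first value is the sentinel.
print(average([-999], 0, 100))   # -> (-999, 0, 0)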

Path 1 test case:
value(k) = valid input, where k < i for 2 ≤ i ≤ 100
value(i) = –999 where 2 ≤ i ≤ 100
Expected results: correct average based on k values and proper totals.
Note: Path 1 cannot be tested stand-alone but must be tested as part of the path 4, 5, and 6 tests.

Path 2 test case:
value(1) = –999
Expected results: average = –999; other totals at initial values.
Path 3 test case:
Attempt to process 101 or more values; the first 100 values should be valid.
Expected results: same as test case 1.

Only three test cases are given here; students are advised to write the remaining test cases.




A second example (its flow graph figure is omitted) has a cyclomatic complexity of 4, which can be calculated as:

1. Number of regions of the flow graph.
2. #Edges – #Nodes + #terminal vertices (usually 2).
3. #Predicate nodes + 1.

Independent Paths:

1. 1, 8

2. 1, 2, 3, 7b, 1, 8

3. 1, 2, 4, 5, 7a, 7b, 1, 8

4. 1, 2, 4, 6, 7a, 7b, 1, 8
Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements.
The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (it must move along at least one new edge in the path). The basis set is not unique; any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to:
1. The number of regions in the flow graph.
2. V(G) = E – N + 2, where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1, where P is the number of predicate nodes.
Finally, prepare test cases that will force execution of each path in the basis set.

Control Structure Testing
Control structure testing is a group of white-box testing methods.
Loop Testing:
Loops are fundamental to many algorithms and need thorough testing. There are four different classes of loops: simple, concatenated, nested, and unstructured. (A test sketch in Python follows these lists.)

Simple Loops, where n is the maximum number of allowable passes through the loop:

o Skip the loop entirely.
o Only one pass through the loop.
o Two passes through the loop.
o m passes through the loop, where m < n.
o (n-1), n, and (n+1) passes through the loop.

Nested Loops

o Start with inner loop. Set all other loops to minimum values.
o Conduct simple loop testing on inner loop.
o Work outwards
o Continue until all loops are tested.

Concatenated Loops

o If independent loops, use simple loop testing.
o If dependent, treat as nested loops.

Unstructured loops

o Don't test – redesign.
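As promised above, a sketch of simple-loop testing. The function under test, sum_first(values, n), is hypothetical: it sums at most the first n items, so n is the maximum number of passes through its loop (n = 5 assumed here):

def sum_first(values, n):
    # Hypothetical loop under test: sum at most the first n items.
    total = 0
    for i, v in enumerate(values):
        if i >= n:
            break
        total += v
    return total

N = 5
data = [1, 2, 3, 4, 5, 6]                 # one more item than the loop allows

def test_simple_loop_boundaries():
    assert sum_first([], N) == 0          # skip the loop entirely
    assert sum_first(data[:1], N) == 1    # one pass
    assert sum_first(data[:2], N) == 3    # two passes
    assert sum_first(data[:3], N) == 6    # m passes, m < n
    assert sum_first(data[:N - 1], N) == 10   # n-1 passes
    assert sum_first(data[:N], N) == 15       # n passes
    assert sum_first(data, N) == 15           # n+1 items; loop still capped at n

test_simple_loop_boundaries()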

Advantages of White box testing
i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
ii) White box testing also helps in optimizing the code.
iii) It helps in removing the extra lines of code, which can bring in hidden defects.

Disadvantages of white box testing:
i) As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find hidden errors, which may create problems and result in failure of the application.

4. Describe the following with respect to integration testing:
A) Process Metrics
The only rational way to improve any process is to measure specific attributes of the process, develop a set of meaningful metrics based on these attributes, and then use the metrics to provide indicators that will lead to a strategy for improvement. But before we discuss software metrics and their impact on software process improvement, it is important to note that process is only one of a number of “controllable factors in improving software quality and organizational performance [PAU94].”
Referring to the figure below, process sits at the center of a triangle connecting three factors that have a profound influence on software quality and organizational performance.

(Figure omitted: process at the center of a triangle connecting people, product, and technology, within a circle of environmental conditions.)
The skill and motivation of people has been shown [BOE81] to be the single most influential factor in quality and performance. The complexity of the product can have a substantial impact on quality and team performance. The technology (i.e., the software engineering methods) that populates the process also has an impact.
In addition, the process triangle exists within a circle of environmental conditions that include the development environment (e.g., CASE tools), business conditions (e.g., deadlines, business rules), and customer characteristics (e.g., ease of communication). We measure the efficacy of a software process indirectly. That is, we derive a set of metrics based on the outcomes that can be derived from the process. Outcomes include measures of errors uncovered before release of the software, defects delivered to and reported by end-users, work products delivered (productivity), human effort expended, calendar time expended, schedule conformance and other measures.
We also derive process metrics by measuring the characteristics of specific software engineering tasks; for example, we might measure the effort and time spent on a particular task.

Grady [GRA92] argues that there are “private and public” uses for different types of process data. Because it is natural that individual software engineers might be sensitive to the use of metrics collected on an individual basis, these data should be private to the individual and serve as an indicator for the individual only. Examples of private metrics include defect rates (by individual), defect rates (by module), and errors found during development. The “private process data” philosophy conforms well to the personal software process approach proposed by Humphrey [HUM95], who describes it in the following manner: The Personal Software Process (PSP) is a structured set of process descriptions, measurements, and methods that can help engineers to improve their personal performance. It provides the forms, scripts, and standards that help them estimate and plan their work. It shows them how to define processes and how to measure their quality and productivity. A fundamental PSP principle is that everyone is different and that a method that is effective for one engineer may not be suitable for another. The PSP thus helps engineers to measure and track their own work so they can find the methods that are best for them.
Humphrey recognizes that software process improvement can and should begin at the individual level. Private process data can serve as an important driver as the individual software engineer works to improve.

Some process metrics are private to the software project team but public to all team members. Examples include defects reported for major software functions (that have been developed by a number of practitioners), errors found during formal technical reviews, and lines of code or function points per module and function. These data are reviewed by the team to uncover indicators that can improve team performance.

Public metrics generally assimilate information that originally was private to individuals and teams. Project-level defect rates (absolutely not attributed to an individual), effort, calendar times, and related data are collected and evaluated in an attempt to uncover indicators that can improve organizational process performance.

Software process metrics can provide significant benefit as an organization works to improve its overall level of process maturity. However, like all metrics, these can be misused, creating more problems than they solve. Grady [GRA92] suggests a software metrics etiquette that is appropriate for both managers and practitioners as they institute a process metrics program:

• Use common sense and organizational sensitivity when interpreting metrics data.

• Provide regular feedback to the individuals and teams who collect measures and metrics.

• Don’t use metrics to appraise individuals.

• Work with practitioners and teams to set clear goals and metrics that will be used to achieve them.

• Never use metrics to threaten individuals or teams.

• Metrics data that indicate a problem area should not be considered “negative.” These data are merely an indicator for process improvement.

• Don’t obsess on a single metric to the exclusion of other important metrics.

As an organization becomes more comfortable with the collection and use of process metrics, the derivation of simple indicators gives way to a more rigorous approach called statistical software process improvement (SSPI). In essence, SSPI uses software failure analysis to collect information about all errors and defects encountered as an application, system, or product is developed and used. Failure analysis works in the following manner:
1. All errors and defects are categorized by origin (e.g., flaw in specification, flaw in logic, nonconformance to standards).
2. The cost to correct each error and defect is recorded.


3. The number of errors and defects in each category is counted and ranked in descending order.
4. The overall cost of errors and defects in each category is computed.
5. Resultant data are analyzed to uncover the categories that result in the highest cost to the organization.
6. Plans are developed to modify the process with the intent of eliminating (or reducing the frequency of) the class of errors and defects that is most costly.
Following steps 1 and 2, a simple defect distribution can be developed (figure omitted) [GRA94]. For the pie chart noted in the figure, eight causes of defects and their origin (indicated by shading) are shown. Grady suggests the development of a fishbone diagram [GRA92] to help in diagnosing the data represented in the frequency diagram. Referring to the figure, the spine of the diagram (the central line) represents the quality factor under consideration (in this case specification defects, which account for 25 percent of the total). Each of the ribs (diagonal lines) connecting to the spine indicates potential causes for the quality problem (e.g., missing requirements, ambiguous specification, incorrect requirements, changed requirements). The spine-and-ribs notation is then added to each of the major ribs of the diagram to expand upon the cause noted; expansion is shown only for the "incorrect" cause in the figure.


The collection of process metrics is the driver for the creation of the fishbone diagram. A completed fishbone diagram can be analyzed to derive indicators that will enable a software organization to modify its process to reduce the frequency of errors and defects.
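A minimal sketch of the failure-analysis tallying in steps 3 to 5, assuming a hypothetical defect log in which each record carries an origin category and a correction cost:

from collections import defaultdict

# Hypothetical defect log: (origin category, cost to correct in hours).
defects = [
    ("specification", 14), ("logic", 6), ("standards", 2),
    ("specification", 20), ("logic", 9), ("specification", 11),
    ("data handling", 5), ("logic", 4),
]

counts = defaultdict(int)
costs = defaultdict(float)
for category, cost in defects:
    counts[category] += 1    # step 3: count per category
    costs[category] += cost  # step 4: total correction cost per category

# Step 5: rank categories by total cost to see where improvement pays most.
for category in sorted(costs, key=costs.get, reverse=True):
    share = 100 * counts[category] / len(defects)
    print(f"{category:14s} count={counts[category]} ({share:4.1f}%) cost={costs[category]:.0f} h")

Ranked this way, the most expensive defect category (here, specification defects) becomes the first target for process modification, exactly as the fishbone analysis above suggests.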

B) Project Metrics
Software process metrics are used for strategic purposes; software project measures are tactical. That is, project metrics and the indicators derived from them are used by a project manager and a software team to adapt project work flow and technical activities.

The first application of project metrics on most software projects occurs during estimation. Metrics collected from past projects are used as a basis from which effort and time estimates are made for current software work. As a project proceeds, measures of effort and calendar time expended are compared to original estimates (and the project schedule). The project manager uses these data to monitor and control progress.

As technical work commences, other project metrics begin to have significance. Production rates represented in terms of pages of documentation, review hours, function points, and delivered source lines are measured. In addition, errors uncovered during each software engineering task are tracked. As the software evolves from specification into design, technical metrics are collected to assess design quality and to provide indicators that will influence the approach taken to code generation and testing.

The intent of project metrics is twofold. First, these metrics are used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks. Second, project metrics are used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality. As quality improves, defects are minimized, and as the defect count goes down, the amount of rework required during the project is also reduced. This leads to a reduction in overall project cost.

One model views project measurement in terms of three classes of measures:

Inputs – measures of the resources (e.g., people, environment) required to do the work.

Outputs – measures of the deliverables or work products created during the software engineering process.

Results – measures that indicate the effectiveness of the deliverables.

In actuality, this model can be applied to both process and project. In the project context, the model can be applied recursively as each framework activity occurs; the output from one activity becomes input to the next. Results metrics can be used to provide an indication of the usefulness of work products as they flow from one framework activity to the next.

C) Software Measurement
Measurements can be either direct or indirect. Direct measures are taken from a feature of an item (e.g. length). Indirect measures associate a measure to a feature of the object being measured (e.g. quality is based upon counting rejects). Direct measures in a product include lines of code (LOC), execution speed, memory size, and defects reported. Indirect measures include functionality, quality, complexity, efficiency, reliability, and maintainability. Direct measures are generally easier to collect than indirect measures. Size-oriented metrics are used to collect direct measures of software engineering output and quality. Function-oriented metrics provide indirect measures.
Size-Oriented Metrics
Size-oriented metrics are a direct measure of software and the process by which it was developed. These metrics can include effort (time), money spent, KLOC (thousands of lines of code), pages of documentation created, errors, and people on the project. From this data some simple size-oriented metrics can be generated (a computational sketch follows the lists below):

Productivity = KLOC / person-month
Quality = defects / KLOC
Cost = cost / KLOC
Documentation = pages of documentation / KLOC

Size-oriented metrics are not universally accepted. The use of LOC as a key measure is the center of the conflict. Proponents of the LOC measure claim:

It is an artifact of all software engineering processes which can easily be counted

Many existing metrics exist which use LOC as an input

A large body of literature and data exist which is predicated on LOC

Opponents of the LOC measure claim:

That it is language dependent

Well designed short programs are penalized

They do not work well with non-procedural languages

Their use in planning is difficult because the planner must estimate LOC before the design is completed
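As flagged above, a computational sketch of the size-oriented metrics, using hypothetical project data:

# Hypothetical project record for size-oriented metrics.
kloc = 12.1          # thousands of delivered lines of code
effort_pm = 24.0     # person-months of effort
cost = 168_000       # total project cost
doc_pages = 365
defects = 29

print(f"productivity  = {kloc / effort_pm:.2f} KLOC/person-month")
print(f"quality       = {defects / kloc:.2f} defects/KLOC")
print(f"cost          = {cost / kloc:,.0f} per KLOC")
print(f"documentation = {doc_pages / kloc:.1f} pages/KLOC")

Because every figure is normalized by KLOC, two projects of different sizes can be compared directly, which is precisely what makes the choice of LOC as the normalizing measure so contentious.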

Function-Oriented Metrics
Function-oriented metrics are indirect measures of software which focus on functionality and utility. The first function-oriented metric was proposed by Albrecht, who suggested a productivity measurement approach called the function point method. Function points (FPs) are derived from countable measures and assessments of software complexity.

Five characteristics are used to calculate function points: the number of user inputs, number of user outputs, number of user inquiries (on-line inputs), number of files, and number of external interfaces (machine-readable interfaces such as tape or disk). Once the five information domain characteristics have been determined, they are weighted using the following table (the standard weighting values):

Measurement parameter            Simple  Average  Complex
Number of user inputs               3       4        6
Number of user outputs              4       5        7
Number of user inquiries            3       4        6
Number of files                     7      10       15
Number of external interfaces       5       7       10

The weighted values are summed and function points are calculated using FP = count-total * (0.65 + 0.01 * SUM(Fi)), where the Fi (i = 1 to 14) are complexity adjustment values.
Once calculated, FPs may be used in place of LOC as a measure of productivity, quality, cost, documentation, and other attributes. Function points were originally designed to be applied to business information systems. Extensions have been suggested, called feature points, which may enable this measure to be applied to other software engineering applications. Feature points accommodate applications in which algorithmic complexity is high, such as real-time, process control, and embedded software applications. Feature points are calculated as function points are, with the addition of one more software characteristic, the algorithm. An algorithm is a bounded computational problem such as inverting a matrix, decoding a bit string, or handling an interrupt. Feature points are calculated using the same weighted-sum approach (weighting table omitted).


The sum of these values is used in the function point calculation to calculate the feature points.
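A minimal sketch of the function point computation, using the average-complexity column of the weighting table above and fourteen hypothetical adjustment values F1..F14 (all counts are invented for illustration):

# FP = count_total * (0.65 + 0.01 * sum(Fi))
WEIGHTS = {                      # "average" column of the weighting table
    "user_inputs": 4,
    "user_outputs": 5,
    "user_inquiries": 4,
    "files": 10,
    "external_interfaces": 7,
}
counts = {                       # hypothetical measured counts
    "user_inputs": 32,
    "user_outputs": 60,
    "user_inquiries": 24,
    "files": 8,
    "external_interfaces": 2,
}

count_total = sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)

# F1..F14: complexity adjustment values, each rated 0 (no influence) to 5 (essential).
f_values = [3] * 14              # hypothetical: everything rated "average"

fp = count_total * (0.65 + 0.01 * sum(f_values))
print(f"count-total = {count_total}, FP = {fp:.0f}")   # count-total = 618, FP = 661

Once the FP figure is in hand, it can replace KLOC as the denominator in the productivity, quality, and cost metrics shown earlier.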

Metric Comparisons
The relationship between LOC and FP depends on the programming language being used. Rough estimates of the number of lines of code needed to implement one function point show that low-level languages (such as assembly) require several times more LOC per FP than high-level languages.

(Table omitted: typical LOC per function point for various programming languages.)

August 2010
Master of Computer Application (MCA) – Semester 5
MC0084 – Software Project Management &
Quality Assurance – 4 Credits
(Book ID: B0958 & B0959)
Assignment Set – 2 (60 Marks)

Answer all questions. Each question carries FIFTEEN marks.

Book ID: B0958
1. Describe the following concepts with the help of relevant examples:
A) Effective Risk Management B) Risk Categories
C) Aids for Risk Identification

2. Describe the following with respect to Team Development and Conflict Management:
A) Centralized-Control Team Organization
B) Decentralized-Control Team Organization
C) Mixed-Control Team Organization


Book ID: B0959
3. Explain the following Functional Specifications with suitable examples:
A) Black-Box Specification B) State-Box Specification
C) Clear-Box Specification

4. Explain various Software quality standards.
