Monday, 9 January 2012

Defect Management Process

A defect is a variance from expectations. Managing defects properly requires a process that prevents defects, discovers them quickly, tracks and resolves them, and improves the development process to reduce future defect occurrences.

The general principles of a Defect Management Process are as follows:
-   The primary goal is to prevent defects. Where this is not possible or practical, the goals are to find the defect as quickly as possible and to minimize its impact.
-   The defect management process, like the entire software development process, should be risk driven, i.e., strategies, priorities, and resources should be based on an assessment of the risk and the degree to which the expected impact of a risk can be reduced.
-   Defect measurement should be integrated into the development process. Information on defects should be captured at the source as a natural by-product of doing the job, not after the fact by people unrelated to the project or system.
-   As much as possible, the capture and analysis of the information should be automated. The QA analyst should look for trends and perform a root cause analysis to identify special-cause and common-cause problems.
-   Defect information should be used to improve the process. As imperfect or flawed processes cause most defects, processes may need to be altered to prevent defects.
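
As an illustration of the automated-capture-and-analysis principle above, here is a minimal sketch in Python (the record fields and values are hypothetical, not from any particular tool) that groups defect records by the phase that introduced them, which is one simple way to surface the trends a QA analyst would then investigate:

```python
from collections import Counter

# Hypothetical defect records, captured at the source during development.
defects = [
    {"id": 1, "phase_created": "requirements", "severity": "major"},
    {"id": 2, "phase_created": "coding", "severity": "minor"},
    {"id": 3, "phase_created": "requirements", "severity": "critical"},
]

# Count defects by originating phase; a cluster in one phase points at
# a process weakness worth a root cause analysis.
by_phase = Counter(d["phase_created"] for d in defects)
print(by_phase.most_common())
```

In this toy data set, requirements defects dominate, which would direct process-improvement attention at the requirements phase rather than at coding.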

Defect Reporting

Recording the defects identified at each stage of the test process is an integral part of a successful life cycle testing approach. The purpose of this activity is to create a complete record of the discrepancies identified during testing. The information captured is used in multiple ways throughout the project, and forms the basis for quality measurement.

A defect can be defined in one of two ways. From the producer's viewpoint, a defect is a deviation from specifications, whether missing, wrong, or extra. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether in the requirements or not. It is critical that defects identified at each stage of the life cycle be tracked to resolution.

Defects are recorded for four major purposes:

-   To ensure the defect is corrected
-   To report status of the application
-   To gather statistics used to develop defect expectations in future applications
-   To improve the software development process
Most project teams use some type of tool to support the defect tracking process. This tool could be as simple as a white board or a table created and maintained in a word processor, or one of the more robust tools available on the market today.

Tools marketed for this purpose usually come with a number of customizable fields for tracking project-specific data in addition to the basics. They also provide advanced features such as standard and ad hoc reporting, email notification to developers or testers when a problem is assigned to them, and graphing capabilities.

At a minimum, the tool selected should support the recording and communication of all significant information about a defect.

For example, a defect log could include:

-   Defect ID number
-   Descriptive defect name and type
-   Source of the defect (the test case or other source that found it)
-   Phase of creation (the SDLC phase in which the defect was introduced)
-   Phase of detection (the SDLC phase in which the defect was found)
-   Component or program that had the defect
-   Defect severity
-   Defect priority
-   Defect status (e.g., open, fixed, closed, user error, design, and so on); more robust tools provide a status history for the defect
-   Date and time tracking, either for the most recent status change or for each change in the status history
-   Detailed description, including the steps necessary to reproduce the defect
-   Screen prints, logs, etc., that will aid the developer in the resolution process
-   Stage of origination
-   Persons assigned to research and correct the defect
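
The minimum fields listed above could be modeled as a simple record. The sketch below is in Python with hypothetical field and status names; it is not the schema of any particular tracking tool, and the status-history mechanism corresponds to the "more robust tools" feature mentioned earlier:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DefectRecord:
    defect_id: int
    name: str                 # descriptive defect name and type
    source: str               # test case or other source that found it
    phase_created: str        # SDLC phase of creation
    phase_detected: str       # SDLC phase of detection
    component: str            # component or program that had the defect
    severity: str
    priority: str
    status: str = "open"      # e.g., open, fixed, closed
    description: str = ""     # steps necessary to reproduce the defect
    assigned_to: str = ""     # person researching/correcting the defect
    status_history: list = field(default_factory=list)

    def set_status(self, new_status: str) -> None:
        # Record (timestamp, old status, new status) so each change
        # in the status history is tracked with date and time.
        self.status_history.append((datetime.now(), self.status, new_status))
        self.status = new_status
```

A record would be created when a tester logs a discrepancy and then updated through the resolution process, for example `d = DefectRecord(1, "Login crash", "TC-17", "coding", "system test", "auth", "critical", "high")` followed by `d.set_status("fixed")`.
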
Severity versus Priority
Based on predefined severity descriptions, the test team should assign the severity of a defect objectively. For example, a severity-one defect may be defined as one that causes data corruption, a system crash, security violations, etc. Severity levels should be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect.
In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective as it may be based on input from users regarding which defects are most important, resources
available, risk, etc.
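
The contrast above can be made concrete with a small sketch: severity follows predefined, objective rules, while priority is set separately during triage. The rule table and symptom names below are hypothetical examples, not a standard scale:

```python
# Objective severity rules, defined at the start of the project.
# Severity 1 matches the severity-one examples in the text.
SEVERITY_RULES = {
    "data corruption": 1,
    "system crash": 1,
    "security violation": 1,
    "incorrect output": 2,
    "cosmetic": 3,
}

def assign_severity(symptom: str) -> int:
    # Severity is looked up, not debated; unmatched symptoms
    # default to the lowest severity level.
    return SEVERITY_RULES.get(symptom, 3)

# Priority, by contrast, is a triage judgment (user input, resources,
# risk) and is stored alongside severity rather than derived from it.
severity = assign_severity("system crash")
```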

A Sample Defect Management Process
The steps below describe a sample defect tracking process. Depending on the size of the project or project team, this process may be substantially more complex.

1. Execute test and log any discrepancies.

The tester executes the test and compares the actual results to the documented expected results. If a discrepancy exists, it is logged as a defect with a status of "open." Supplementary documentation, such as screen prints or program traces, is attached if available.

2. Determine if discrepancy is a defect.

The Test Manager or tester reviews the defect log with an appropriate member of the development team to determine whether the discrepancy is truly a defect and is repeatable. If it is not a defect, or is not repeatable, the log should be closed with an explanatory comment.

3. Assign defect to developer.

If a defect exists, it is assigned to a developer for correction. This may be handled automatically by the tool, or may be determined as a result of the discussion in step 2.

4. Defect resolution process.

When the developer has acknowledged the defect is valid, the resolution process begins. The four steps of the resolution process are:

-   Prioritize the correction.
Three recommended prioritization levels are critical, major, and minor. Critical means there is a serious impact on the organization's business operation or on further testing. Major causes an output of the software to be incorrect, or stops or impedes further testing. Minor means something is wrong but does not directly affect the user of the system or further testing, such as a documentation error or a cosmetic GUI error.

The purpose of this step is to initiate any immediate action that may be required after answering the questions: Is this a new or previously reported defect? What priority should be given to correcting this defect? Should steps be taken to minimize the impact of the defect before the correction, such as notifying users or finding a workaround?

-   Schedule the correction.
Based on the priority of the defect, the correction should be scheduled. All defects are not created equal from the perspective of how quickly they need to be corrected, although they may all be equal from a defect-prevention perspective.
Some organizations actually treat lower priority defects as changes.

-   Correct the defect.
The developer corrects the defect and, upon completion, updates the log with a description of the correction and changes the status to "corrected" or "retest." The tester then verifies that the defect has been removed from the system.
Additional regression testing is performed as needed, based on the severity and impact of the correction applied. In addition, test data, checklists, etc., should be reviewed and perhaps enhanced so that in the future this defect will be caught earlier. If the retest results match the expected results, the tester updates the defect status to "closed." If the problem remains, the tester changes the status back to "open" and this step is repeated until closure.

-   Report the resolution.
Once the defect has been corrected and the correction verified, appropriate
developers, users, etc., need to be notified that the defect has been corrected, the nature of the correction, when the correction will be released, and how the correction will be released.

As in many aspects of defect management, this is an area where an automated process would help. Most defect management tools capture information on who found and reported the problem and therefore provide an initial list of who needs to be notified.

Computer forums and electronic mail can help notify users of widely distributed software.
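
The correct-and-retest loop in step 4 above can be sketched as a small state machine. The status names follow the examples in the text ("open," "corrected," "closed"); the transition table itself is an assumption for illustration:

```python
# Allowed defect status transitions in the correct-and-retest loop.
TRANSITIONS = {
    "open": {"corrected"},            # developer applies a fix
    "corrected": {"closed", "open"},  # retest passes -> closed, fails -> open
    "closed": set(),                  # terminal state
}

def next_status(current: str, retest_passed: bool = False) -> str:
    """Return the next status in the resolution loop."""
    if current == "open":
        return "corrected"
    if current == "corrected":
        return "closed" if retest_passed else "open"
    raise ValueError(f"no transition from status {current!r}")
```

Encoding the loop this way makes the "repeated until closure" rule explicit: a failed retest can only send the defect back to "open," never directly to "closed."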

Test Reports are issued periodically throughout the testing process to communicate the test status to the rest of the team and management. These reports usually include a summary of the open defects, by severity or priority. Additional graphs and metrics can also be provided to further describe the status of the application.

Using defects to improve processes is not done by many organizations today, but it offers one of the greatest areas of payback. NASA emphasizes that any defect represents a weakness in the process. Seemingly unimportant defects are, from a process perspective, no different from critical defects. It is only the developer's good luck that prevents a defect from causing a major failure. Even minor defects, therefore, represent an opportunity to learn how to improve the process and prevent potentially major failures. While the defect itself may not be a big deal, the fact that there was a defect is a big deal.

Based on the research team's findings, this activity should include the following:

-   Go back to the process that originated the defect to understand what caused the defect
-   Go back to the verification and validation process, which should have caught the defect earlier. Not only can valuable insight be gained into how to strengthen the review process; these steps also make everyone involved in these activities take them more seriously. This human-factor dimension alone, according to some of the people the research team interviewed, can have a very large impact on the effectiveness of the review process.
