Wednesday 21 December 2011

Interview Questions for Manual Testing


 What is installation testing?

A: Installation testing is the testing of a full, partial, or upgrade install/uninstall process. The installation test is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed when necessary.

 What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

 What is recovery/error testing?


A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

 What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

 What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

 What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.


 What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

 What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

 What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

 What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

 What is the role of a Test Engineer?

A: A Test Engineer is an engineer who specializes in testing. Test engineers create test cases, procedures, and scripts and generate data. They execute test procedures and scripts, analyze standards of measurements, and evaluate results of system/integration/regression testing. They also...
v  Speed up the work of your development staff;
v  Reduce your risk of legal liability;
v  Give you the evidence that your software is correct and operates properly;
v  Improve problem tracking and reporting;
v  Maximize the value of your software;
v  Maximize the value of the devices that use it;

v  Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
v  Help the work of your development staff, so the development team can devote its time to building up your product;
v  Promote continual improvement;
v  Provide documentation required by the FDA, FAA, other regulatory agencies and your customers;
v  Save money by discovering defects 'early' in the design process, before failures occur in production or in the field;
v  Save the reputation of your company by discovering bugs and design flaws before they damage the reputation of your company.

 What is the role of a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the application's software, apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.


 What is the role of a System Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software, apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

 What is the role of a Database Administrator?

A: Database Administrators, Test Build Managers and System Administrators deliver current software versions to the test environment, install the application's software, apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.
  
 What is the role of a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

 What is the role of a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, test scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.



 What is a test schedule?

A: The test schedule identifies all tasks required for a successful testing effort, along with the schedule of all test activities and the resource requirements.

 What is software-testing methodology?  

 A: One software testing methodology is a three-step process of...

v  Creating a test strategy;
v  Creating a test plan/design;
v  Executing tests. 

This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his customers' applications.

 What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

 How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:
v  A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
v  A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
v  Testing methodology. This is based on known standards.
v  Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
v  Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:
v  An approved and signed off test strategy document, test plan, including test cases.
v  Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

 How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results.

v  Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
v  Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
v  It is the test team who, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.
v  Test scenarios are executed through the use of test procedures or scripts.
v  Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
v  Test procedures or scripts include the specific data that will be used for testing the process or transaction.
v  Test procedures or scripts may cover multiple test scenarios.
v  Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope (see the traceability sketch after this list).
v  Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
v  Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
v  A pre-test meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
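
As a minimal illustration of the traceability idea mentioned above (the requirement and test-case IDs are invented for illustration, not taken from the text), a small script can map test cases back to requirements and flag both uncovered requirements and out-of-scope tests:

# Minimal traceability check: every requirement should have at least one test,
# and every test should map to a known requirement (hypothetical IDs).
requirements = {"REQ-001", "REQ-002", "REQ-003"}

test_to_requirements = {
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-001", "REQ-002"},
    "TC-03": {"REQ-004"},   # references a requirement that is not in scope
}

covered = set().union(*test_to_requirements.values())
uncovered = requirements - covered
out_of_scope = covered - requirements

print("Requirements with no test coverage:", sorted(uncovered))
print("Referenced but unknown requirements:", sorted(out_of_scope))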

Inputs for this process:

v  Approved Test Strategy Document.
v  Test tools, or automated test tools, if applicable.
v  Previously developed scripts, if applicable.
v  Test documentation problems uncovered as a result of testing.
v  A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:

v  Approved documents of test scenarios, test cases, test conditions and test data.
v  Reports of software design issues, given to software developers for correction.  

 How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities. The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and are documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing. A pass/fail criterion is used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.

Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager and/or Test Team Lead.
After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance.
The test team reviews test document problems identified during testing and updates documents where appropriate.
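
As a rough sketch of the test execution log described above (the field names and values are illustrative assumptions, not prescribed by the text), each procedure run can be recorded with its outcome and any defect reference:

import csv
from datetime import date

# Illustrative execution-log entries; in practice these come from running the
# documented test procedures and recording pass/fail plus any defect IDs.
log_entries = [
    {"procedure": "TP-010", "tester": "A. Tester", "date": date.today().isoformat(),
     "result": "pass", "defect_id": ""},
    {"procedure": "TP-011", "tester": "A. Tester", "date": date.today().isoformat(),
     "result": "fail", "defect_id": "BUG-123"},
]

with open("test_execution_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["procedure", "tester", "date", "result", "defect_id"])
    writer.writeheader()
    writer.writerows(log_entries)

failed = [e["procedure"] for e in log_entries if e["result"] == "fail"]
print("Procedures with defects to discuss at the checkpoint meeting:", failed)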

Inputs for this process:
v  Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
v  Test tools, including automated test tools, if applicable.
v  Developed scripts.
v  Changes to the design, i.e. Change Request Documents.
v  Test data.
v  Availability of the test team and project team.
v  General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
v  Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
v  Test Readiness Document.
v  Document Updates. 

Outputs for this process:

v  Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed-off with revised testing deliverables.
v  Changes to the code, also known as test fixes.
v  Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.
v  Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
v  Formal record of test incidents, usually part of problem tracking.
v  Base-lined package, also known as tested source and object code, ready for migration to the next level.
 
What is the difference between testing and debugging?

The big difference is that debugging is conducted by a programmer, who fixes the errors during the debugging phase. A tester never fixes errors, but rather finds them and returns them to the programmer.

What is the difference between structural and functional testing?

Structural is a "white box" testing and based on the algorithm or code. Functional testing is a "black box" (behavioral) testing where the tester verifies the functional specification.

What is a bug? What types of bugs do you know?

A bug is an error found during the execution of a program. There are two broad types of bugs: syntax errors and logical errors.

What is the difference between testing and quality assurance (QA)?

 The goals of the two are different: the goal of testing is to find errors, while the goal of QA is to prevent errors in the program.



 Does every software project need testers?

While all projects will benefit from testing, some projects may not require independent test staff to succeed. Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors. For instance, if the project is a short-term, small, low-risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed.

In some cases an IT organization may be too small or new to have a testing staff even if the situation calls for it. In these circumstances it may be appropriate to instead use contractors or outsourcing, or adjust the project management and development approach (by switching to more senior developers and agile test-first development, for example). Inexperienced managers sometimes gamble on the success of a project by skipping thorough testing or having programmers do post-development functional testing of their own work, a decidedly high risk gamble.
For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. As in any business, the use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives. For example, programmers typically have the perspective of 'what are the technical issues in making this functionality work?'. A test engineer typically has the perspective of 'what might go wrong with this functionality, and how can we ensure it meets expectations?'. Technical people who can be highly effective in approaching tasks from both of those perspectives are rare, which is why, sooner or later, organizations bring in test specialists.

How are new Software QA processes introduced in an existing organization?

A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.

For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.
Other possibilities include incremental self-managed team approaches such as 'Kaizen' methods of continuous process improvement, the Deming-Shewhart Plan-Do-Check-Act cycle, and others.

 What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

v  Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.

v  Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

v  Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

v  Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

v  Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.





What should be done after a bug is found?

The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process:

v  Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
v  Bug identifier (number, ID, etc.)
v  Current bug status (e.g., 'Released for Retest', 'New', etc.)
v  The application name or identifier and version
v  The function, module, feature, object, screen, etc. where the bug occurred
v  Environment specifics, system, platform, relevant hardware specifics
v  Test case name/number/identifier
v  One-line bug description
v  Full bug description
v  Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
v  Names and/or descriptions of file/data/messages/etc. used in test
v  File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
v  Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
v  Was the bug reproducible?
v  Tester name
v  Test date
v  Bug reporting date
v  Name of developer/group/organization the problem is assigned to
v  Description of problem cause
v  Description of fix
v  Code section/file/module/class/method that was fixed
v  Date of fix
v  Application version that contains the fix
v  Tester responsible for retest
v  Retest date
v  Retest results
v  Regression testing requirements
v  Tester responsible for regression tests
v  Regression testing results 

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
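
To make the tracking items listed above concrete, here is a minimal sketch of a bug record as a data structure. It covers only a small subset of the fields, and all names and values are illustrative assumptions:

from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    # A small subset of the tracking items listed above; field names are illustrative.
    bug_id: str
    status: str                  # e.g. 'New', 'Released for Retest'
    application: str
    version: str
    summary: str                 # one-line description
    description: str             # full description and steps to reproduce
    severity: int                # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    assigned_to: str = ""
    retest_results: str = ""
    regression_notes: List[str] = field(default_factory=list)

bug = BugReport(
    bug_id="BUG-123", status="New", application="OrderEntry", version="2.4.1",
    summary="Save button disabled after editing an order",
    description="1. Open an existing order 2. Edit quantity 3. Save button stays disabled",
    severity=2, reproducible=True,
)
print(bug.bug_id, bug.status, "severity", bug.severity)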

 How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below; a small sketch of an exit-criteria check follows the list:
v  Deadlines (release deadlines, testing deadlines, etc.)
v  Test cases completed with certain percentage passed
v  Test budget depleted
v  Coverage of code/functionality/requirements reaches a specified point
v  Bug rate falls below a certain level
v  Beta or alpha testing period ends
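
A lightweight way to apply such factors is to check each exit criterion explicitly. The thresholds and figures below are purely illustrative assumptions, not recommended values:

# Illustrative exit-criteria check; the thresholds are assumptions, not rules.
results = {"executed": 480, "planned": 500, "passed": 462, "open_critical_bugs": 1,
           "requirements_covered_pct": 97}

criteria = {
    "test cases executed >= 95% of planned": results["executed"] / results["planned"] >= 0.95,
    "pass rate >= 95% of executed": results["passed"] / results["executed"] >= 0.95,
    "no open critical bugs": results["open_critical_bugs"] == 0,
    "requirements coverage >= 95%": results["requirements_covered_pct"] >= 95,
}

for name, met in criteria.items():
    print(("MET    " if met else "NOT MET"), name)
print("Ready to stop testing:", all(criteria.values()))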

 What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include the following (a small risk-scoring sketch follows the list):

v  Which functionality is most important to the project's intended purpose?
v  Which functionality is most visible to the user?
v  Which functionality has the largest safety impact?
v  Which functionality has the largest financial impact on users?
v  Which aspects of the application are most important to the customer?
v  Which aspects of the application can be tested early in the development cycle?
v  Which parts of the code are most complex, and thus most subject to errors?
v  Which parts of the application were developed in rush or panic mode?
v  Which aspects of similar/related previous projects caused problems?
v  Which aspects of similar/related previous projects had large maintenance expenses?
v  Which parts of the requirements and design are unclear or poorly thought out?
v  What do the developers think are the highest-risk aspects of the application?
v  What kinds of problems would cause the worst publicity?
v  What kinds of problems would cause the most customer service complaints?

v  What kinds of tests could easily cover multiple functionalities?
v  Which tests will have the best high-risk-coverage to time-required ratio?
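
One simple way to turn these questions into a test-ordering decision is a per-feature risk score, for example impact multiplied by likelihood. The feature names, scales and scores below are illustrative assumptions:

# Illustrative risk-based prioritization: score = impact x likelihood (1-5 scales).
features = [
    {"name": "payment processing", "impact": 5, "likelihood": 4},
    {"name": "report export",      "impact": 2, "likelihood": 3},
    {"name": "login/security",     "impact": 5, "likelihood": 2},
    {"name": "help pages",         "impact": 1, "likelihood": 2},
]

for f in features:
    f["risk"] = f["impact"] * f["likelihood"]

# Test the highest-risk features first when time is short.
for f in sorted(features, key=lambda f: f["risk"], reverse=True):
    print(f"{f['name']:<20} risk={f['risk']}")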


 What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

 How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.

 How are World Wide Web sites tested?

Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:

What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
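
As a rough illustration of the load question above (the URL, request count and worker count are placeholders, and real load testing would use a dedicated web load testing tool as the text notes), a handful of concurrent requests can give a first look at response times:

# Very rough load sketch using only the standard library; a real load test
# would use a dedicated tool. The URL and request counts are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"   # placeholder
REQUESTS = 20

def timed_get(_):
    start = time.time()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=5) as pool:
    times = list(pool.map(timed_get, range(REQUESTS)))

print(f"{len(times)} requests, avg {sum(times)/len(times):.3f}s, max {max(times):.3f}s")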
Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they by using? Are they intra- organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
Will down time for server and content maintenance/upgrades be allowed? How much?
What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?
How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
How will internal and external links be validated and updated? How often?
Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world Internet 'traffic congestion' problems to be accounted for in testing?
How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
How are cgi programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
All pages should have links external to the page; there should be no dead-end pages.
The page owner, revision date, and a link to a contact person or organization should be included on each page. 




 How is testing affected by object-oriented designs?

Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well-designed this can simplify test design.

 What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained' (see the Softwareqatest.com Books page). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before writing the application code. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected. For more information on XP and other 'agile' software development approaches (Scrum, Crystal, etc.), see the 'Other Resources' section.

 Why is it often hard for organizations to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility.  This is a problem in any business, but it's a particularly difficult problem in the software industry. Software quality problems are often not as readily apparent as they might be in the case of an industry with more physical products, such as auto manufacturing or home construction.
Additionally, many organizations are able to determine who is skilled at fixing problems, and then reward such people. However, determining who has a talent for preventing problems in the first place, and figuring out how to incentivize such behavior, is a significant challenge.


 Who is responsible for risk management?

Risk management means the actions taken to avoid things going wrong on a software development project, things that might negatively impact the quality, timeliness, or cost of a project. This is, of course, a shared responsibility among everyone involved in a project. However, there needs to be a 'buck stops here' person who can consider the relevant tradeoffs when decisions are required, and who can ensure that everyone is handling their risk management responsibilities.
It is not unusual for the term 'risk management' to never come up at all in a software organization or project. If it does come up, it's often assumed to be the responsibility of QA or test personnel. Or there may be a 'risks' or 'issues' section of a project, QA, or test plan, and it's assumed that this means that risk management has taken place.
The issues here are similar to "Who should decide when software is ready to be released?" It's generally NOT a good idea for a test lead, test manager, or QA manager to be the 'buck stops here' person for risk management. Typically QA/Test personnel or managers are not managers of developers, analysts, designers and many other project personnel, and so it would be difficult for them to ensure that everyone on a project is handling their risk management responsibilities. Additionally, knowledge of all the considerations that go into risk management mitigation and tradeoff decisions is rarely the province of QA/Test personnel or managers. Based on these factors, the project manager is usually the most appropriate 'buck stops here' risk management person. QA/Test personnel can, however, provide input to the project manager. Such input could include analysis of quality-related risks, risk monitoring, process adherence reporting, defect reporting, and other information.

 Who should decide when software is ready to be released?

In many projects this depends on the release criteria for the software. Such criteria are often in turn based on the decision to end testing, discussed in FAQ #2 item "How can it be known when to stop testing?" Unfortunately, for any but the simplest software projects, it is nearly impossible to adequately specify useful criteria without a significant amount of assumptions and subjectivity. For example, if the release criteria is based on passing a certain set of tests, there is likely an assumption that the tests have adequately addressed all appropriate software risks. For most software projects, this would of course be impossible without enormous expense, so this assumption would be a large leap of faith. Additionally, since most software projects involve a balance of quality, timeliness, and cost, testing alone cannot address how to balance all three of these competing factors when release decisions are needed.
A typical approach is for a lead tester or QA or Test manager to be the release decision maker. This again involves significant assumptions - such as an assumption that the test manager understands the spectrum of considerations that are important in determining whether software quality is 'sufficient' for release, or the assumption that quality does not have to be balanced with timeliness and cost. In many organizations, 'sufficient quality' is not well defined, is extremely subjective, may have never been usefully discussed, or may vary from project to project or even from day to day.
Release criteria considerations can include deadlines, sales goals, business/market/competitive considerations, business segment quality norms, legal requirements, technical and programming considerations, end-user expectations, internal budgets, impacts on other organization projects or goals, and a variety of other factors. Knowledge of all these factors is often shared among a number of personnel in a large organization, such as the project manager, director, customer service manager, technical lead or manager, marketing manager, QA manager, etc. In smaller organizations or projects it may be appropriate for one person to be knowledgeable in all these areas, but that person is typically a project manager, not a test lead or QA manager.
For these reasons, it's generally not a good idea for a test lead, test manager, or QA manager to decide when software is ready to be released. Their responsibility should be to provide input to the appropriate person or group that makes a release decision. For small organizations and projects that person could be a product manager, a project manager, or similar manager. For larger organizations and projects, release decisions might be made by a committee of personnel with sufficient collective knowledge of the relevant considerations.

 What can be done if requirements are changing continuously? 

This is a common problem for organizations where there are expectations that requirements can be pre-determined and remain stable. If these expectations are reasonable, here are some approaches:
Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
If the code is well commented and well documented this makes changes easier for the developers.
Use some type of rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
Negotiate to allow only easily implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
Balance the effort put into setting up automated testing with the expected effort required to refactor the automated tests to deal with changes.
Try to design some flexibility into automated test scripts.
Focus initial automated testing on application aspects that are most likely to remain unchanged.
Devote appropriate effort to risk analysis of changes to minimize regression-testing needs.
Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)
Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
If this is a continuing problem, and the expectation that requirements can be pre-determined and remain stable is NOT reasonable, it may be a good idea to figure out why the expectations are not aligned with reality, and to adjust the organization's or project's software development process to take this into account. It may be appropriate to consider agile development approaches.

 What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it could indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. (If the functionality is minor and low risk then no action may be necessary.) If not removed, information will be needed to determine risks and to determine any added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality.
This problem is a standard aspect of projects that include COTS (Commercial Off-The-Shelf) software or modified COTS software. The COTS part of the project will typically have a large amount of functionality that is not included in project requirements, or may be simply undetermined. Depending on the situation, it may be appropriate to perform in-depth analysis of the COTS software and work closely with the end user to determine which pre-existing COTS functionality is important and which functionality may interact with or be affected by the non-COTS aspects of the project. A significant regression testing effort may be needed (again, depending on the situation), and automated regression testing may be useful.

 How can Software QA processes be implemented without reducing productivity?

By implementing QA processes slowly over time, using consensus to reach agreement on processes, focusing on processes that align tightly with organizational goals, and adjusting/experimenting/reflecting as an organization matures, productivity can be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, avoid a 'Process Police' mentality, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning, reviews, and inspections will be needed, but less time will be required for late-night bug-fixing and handling of irate customers.
Other possibilities include incremental self-managed team approaches such as 'Kaizen' methods of continuous process improvement, the Deming-Shewhart Plan-Do-Check-Act cycle, and others.

 What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is generally no easy solution in this situation. One approach is:
v  Hire good people.
v  Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer.
v  Everyone in the organization should be clear on what 'quality' means to the customer.
v  Depending on the growth rate, it is possible that incremental self-managed team approaches may be applicable, such as 'Kaizen' methods of continuous process improvement, the Deming-Shewhart Plan-Do-Check-Act cycle, and others.

 What's the best approach to software test estimation?

There is no simple answer for this. The 'best approach' is highly dependent on the particular organization and project and the experience of the personnel involved.
For example, given two software projects of similar complexity and size, the appropriate test effort for one project might be very large if it was for life-critical medical equipment software, but might be much smaller for the other project if it was for a low-cost computer game. A test estimation approach that only considered size and complexity might be appropriate for one project but not for the other.







Following are some approaches to consider.

v  Implicit Risk Context Approach:
A typical approach to test estimation is for a project manager or QA manager to implicitly use risk context, in combination with past personal experiences in the organization, to choose a level of resources to allocate to testing. In many organizations, the 'risk context' is assumed to be similar from one project to the next, so there is no explicit consideration of risk context. (Risk context might include factors such as the organization's typical software quality levels, the software's intended use, the experience level of developers and testers, etc.) This is essentially an intuitive guess based on experience.

v  Metrics-Based Approach:
A useful approach is to track past experience of an organization's various projects and the associated test effort that worked well for projects. Once there is a set of data covering characteristics for a reasonable number of projects, then this 'past experience' information can be used for future test project planning. (Determining and collecting useful project metrics over time can be an extremely difficult task.) For each particular new project, the 'expected' required test time can be adjusted based on whatever metrics or other information is available, such as function point count, number of external system interfaces, unit testing done by developers, risk levels of the project, etc. In the end, this is essentially 'judgement based on documented experience', and is not easy to do successfully.

v  Test Work Breakdown Approach:
Another common approach is to decompose the expected testing tasks into a collection of small tasks for which estimates can, at least in theory, be made with reasonable accuracy. This of course assumes that an accurate and predictable breakdown of testing tasks and their estimated effort is feasible. In many large projects, this is not the case. For example, if a large number of bugs are being found in a project, this will add to the time required for testing, retesting, bug analysis and reporting. It will also add to the time required for development, and if development schedules and efforts do not go as planned, this will further impact testing.

v  Iterative Approach:
In this approach for large test efforts, an initial rough testing estimate is made. Once testing begins, a more refined estimate is made after a small percentage (eg, 1%) of the first estimate's work is done. At this point testers have obtained additional test project knowledge and a better understanding of issues, general software quality, and risk. Test plans and schedules can be refactored if necessary and a new estimate provided. Then a yet-more-refined estimate is made after a somewhat larger percentage (eg, 2%) of the new work estimate is done. Repeat the cycle as necessary/appropriate.

v  Percentage-of-Development Approach:
Some organizations utilize a quick estimation method for testing based on the estimated programming effort. For example, if a project is estimated to require 1000 hours of programming effort, and the organization normally finds that a 40% ratio for testing is appropriate, then an estimate of 400 hours for testing would be used. This approach may or may not be useful depending on the project-to-project variations in risk, personnel, types of applications, levels of complexity, etc.
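
The percentage-of-development arithmetic is simple enough to sketch directly; the 1000-hour figure and 40% ratio below come from the example just given:

# Percentage-of-development estimate, using the figures from the example above.
development_hours = 1000
test_ratio = 0.40            # organization's historical testing-to-development ratio

test_hours = development_hours * test_ratio
print(f"Estimated test effort: {test_hours:.0f} hours")   # 400 hours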

Successful test estimation is a challenge for most organizations, since few can accurately estimate software project development efforts, much less the testing effort of a project. It is also difficult to attempt testing estimates without first having detailed information about a project, including detailed requirements, the organization's experience with similar projects in the past, and an understanding of what should be included in a 'testing' estimation for a project (functional testing? unit testing? reviews? inspections? load testing? security testing?).


With agile software development approaches, test effort estimations may be unnecessary if pure test-driven development is utilized. In general, agile-based projects by their nature will not be heavily dependent on large testing efforts, since they emphasize the construction of releasable software in very short iteration cycles. Therefore test effort estimates may not be as difficult and the impact of inaccurate estimates will be minimized.

 What's the best way to choose a test automation tool?

It's easy to get caught up in enthusiasm for the 'silver bullet' of test automation, where the dream is that a single mouse click can initialize thorough unattended testing of an entire software application, bugs will be automatically reported, and easy-to-understand summary reports will be waiting in the manager's in-box in the morning.

Although that may in fact be possible in some situations, it is not the way things generally play out.
In manual testing, the test engineer exercises software functionality to determine if the software is behaving in an expected way. This means that the tester must be able to judge what the expected outcome of a test should be, such as expected data outputs, screen messages, changes in the appearance of a User Interface, XML files, database changes, etc. In an automated test, the computer does not have human-like 'judgment' capabilities to determine whether or not a test outcome was correct. This means there must be a mechanism by which the computer can do an automatic comparison between actual and expected results for every automated test scenario and unambiguously make a pass or fail determination. This factor may require a significant change in the entire approach to testing, since in manual testing a human is involved and can:

v  Make mental adjustments to expected test results based on variations in the pre-test state of the software system
v  Often make on-the-fly adjustments, if needed, to data used in the test
v  Make pass/fail judgments about results of each test
v  Make quick judgments and adjustments for changes to requirements
v  Make a wide variety of other types of judgments and adjustments as needed.
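
By contrast, an automated test must encode the expected result up front so that the pass/fail decision is made mechanically. A minimal sketch follows; the function under test and its expected outputs are invented purely for illustration:

import unittest

def apply_discount(price, percent):
    # Hypothetical function under test, invented for illustration.
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # Expected results are fixed in advance; the comparison between actual and
    # expected output is mechanical, with no human judgment at run time.
    def test_ten_percent(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_percent(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()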
For those new to test automation, it might be a good idea to do some reading or training first. There are a variety of ways to go about doing this; some example approaches are:

v  Read through information on the web about test automation, such as general information available on some test tool vendor sites or some of the automated testing articles listed in the Softwareqatest.com Other Resources section.
v  Read some books on test automation such as those listed in the Software QA and Testing Resource Center Bookstore.
v  Obtain some test tool trial versions or low cost or open source test tools and experiment with them.
v  Attend software testing conferences or training courses related to test automation.

As in anything else, proper planning and analysis are critical to success in choosing and utilizing an automated test tool. Choosing a test tool just for the purpose of 'automating testing' is not useful; useful purposes might include: testing more thoroughly, testing in ways that were not previously feasible via manual methods (such as load testing), testing faster, or reducing excessively tedious manual testing. Automated testing rarely enables savings in the cost of testing, although it may result in software lifecycle savings (or increased sales) just as with any other quality-related initiative. With the proper background and understanding of test automation, the following considerations can be helpful in choosing a test tool (automated testing will not necessarily resolve them, they are only considerations for automation potential):

v  Analyze the current non-automated testing situation to determine where testing is not being done or does not appear to be sufficient
v  Where is current testing excessively time-consuming?
v  Where is current testing excessively tedious?
v  What kinds of problems are repeatedly missed with current testing?
v  What testing procedures are carried out repeatedly (such as regression testing or security testing)?
v  What testing procedures are not being carried out repeatedly but should be?
v  What test tracking and management processes can be implemented or made more effective through the use of an automated test tool?

Taking into account the testing needs determined by analysis of these considerations and other appropriate factors, the types of desired test tools can be determined. For each type of test tool (such as functional test tool, load test tool, etc.) the choices can be further narrowed based on the characteristics of the software application. The relevant characteristics will depend, of course, on the situation and the type of test tool and other factors. Such characteristics could include the operating system, GUI components, development languages, web server type, etc. Other factors affecting a choice could include experience level and capabilities of test personnel, advantages/disadvantages in developing a custom automated test tool, tool costs, tool quality and ease of use, usefulness of the tool on other projects, etc. Once a short list of potential test tools is selected, several can be utilized on a trial basis for a final determination.

 How can it be determined if a test environment is appropriate?

This is a difficult question in that it typically involves tradeoffs between 'better' test environments and cost. The ultimate situation would be a collection of test environments that mimic exactly all possible hardware, software, network, data, and usage characteristics of the expected live environments in which the software will be used. For many software applications, this would involve a nearly infinite number of variations, and would clearly be impossible. And for new software applications, it may also be impossible to predict all the variations in environments in which the application will run. For very large, complex systems, duplication of a 'live' type of environment may be prohibitively expensive. In reality judgments must be made as to which characteristics of software application environment are important, and test environments can be selected on that basis after taking into account time, budget, and logistical constraints. Such judgments are preferably made by those who have the most appropriate technical knowledge and experience, along with an understanding of risks and constraints.

For smaller or low risk projects, an informal approach is common, but for larger or higher risk projects (in terms of money, property, or lives) a more formalized process involving multiple personnel and significant effort and expense may be appropriate.

In some situations it may be possible to mitigate the need for maintenance of large numbers of varied test environments. One approach might be to coordinate internal testing with beta testing efforts. Another possible mitigation approach is to provide built-in automated tests that run automatically upon installation of the application by end-users. These tests might then automatically report back information, via the Internet, about the application environment and problems encountered.
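
One way to picture the built-in self-test idea (the checks, field names and report shape are entirely hypothetical, and the actual reporting endpoint is omitted) is a small check that gathers environment details on installation and packages them for reporting:

# Hypothetical post-install self-test: gather environment details that a
# built-in check might report back (the reporting mechanism is omitted).
import json
import platform
import sys

def environment_report():
    return {
        "os": platform.platform(),
        "python": sys.version.split()[0],
        "machine": platform.machine(),
    }

def self_test():
    checks = {"can_serialize_report": False}
    try:
        json.dumps(environment_report())
        checks["can_serialize_report"] = True
    except (TypeError, ValueError):
        pass
    return checks

print(json.dumps({"environment": environment_report(), "checks": self_test()}, indent=2))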



 What is GUI testing and how are test cases designed for it?

GUI testing is the process of testing a graphical user interface to ensure it meets its written specifications. This is normally done through the use of a variety of test cases.
To generate a ‘good’ set of test cases, the test designer must be certain that the suite covers all the functionality of the system and also fully exercises the GUI itself. The difficulty in accomplishing this task is twofold: one has to deal with domain size, and then one has to deal with sequences. In addition, the tester faces more difficulty when they have to do regression testing.
The second problem is the sequencing problem. Some functionality of the system may only be accomplishable by following some complex sequence of GUI events. For example, to open a file a user may have to click on the File Menu and then select the Open operation, and then use a dialog box to specify the file name, and then focus the application on the newly opened window. Obviously, increasing the number of possible operations increases the sequencing problem exponentially. This can become a serious issue when the tester is creating test cases manually.
Regression testing also becomes a problem with GUIs. This is because the GUI may change significantly across versions of the application, even though the underlying application may not. A test designed to follow a certain path through the GUI may not be able to follow that path, since a button, menu item or dialog may not be where it used to be.
These issues have driven the GUI testing problem domain towards automation. Many different techniques have been proposed to automatically generate test suites that are complete and that simulate user behavior.
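
To make the sequencing idea concrete without depending on any particular GUI tool, here is a toy sketch in which a test script drives the File/Open/dialog sequence described above against a stand-in application model. Everything here (the FakeApp class and its behaviour) is invented for illustration:

# Toy stand-in for a GUI application, used only to illustrate that some
# behaviour is reachable only via a specific sequence of GUI events.
class FakeApp:
    def __init__(self):
        self.menu_open = False
        self.dialog_open = False
        self.opened_file = None

    def click(self, target):
        if target == "File":
            self.menu_open = True
        elif target == "Open" and self.menu_open:
            self.dialog_open = True
        else:
            raise AssertionError(f"'{target}' is not clickable in this state")

    def type_filename(self, name):
        if not self.dialog_open:
            raise AssertionError("Open dialog is not displayed")
        self.opened_file = name
        self.menu_open = self.dialog_open = False

def test_open_file_sequence():
    app = FakeApp()
    app.click("File")                  # step 1: open the File menu
    app.click("Open")                  # step 2: choose Open
    app.type_filename("report.txt")    # step 3: supply the file name
    assert app.opened_file == "report.txt"

test_open_file_sequence()
print("open-file sequence passed")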


 What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document contains a particular piece of information. Use documentation change management, if possible.

 What about requirements?

A: Requirement specifications are important: one of the most reliable ways of ensuring problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, "user-friendly", which is too subjective. A testable requirement would be something such as, "the product shall allow the user to enter their previously-assigned password to access the application". Care should be taken to involve all of a project's significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers and salespeople; anyone who could later derail the project if their expectations aren't met should be included as a customer, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine if a software application is performing correctly.


 What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

   
 What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as:

v  Test case identifier;
v  Test case name;
v  Objective;
v  Test conditions/setup;
v  Input data requirements/steps, and
v  Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
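
As an illustration only, the particulars listed above map naturally onto a simple record structure. The Python sketch below shows one way a test case could be captured; the field names mirror the list above, and the sample login test case is invented for the example.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    identifier: str                       # Test case identifier
    name: str                             # Test case name
    objective: str                        # Objective
    setup: str                            # Test conditions/setup
    steps: List[str] = field(default_factory=list)   # Input data requirements/steps
    expected_result: str = ""             # Expected results

login_case = TestCase(
    identifier="TC-001",
    name="Login with a valid password",
    objective="Verify that a registered user can log in",
    setup="User 'demo' exists with a known password",
    steps=["Open the login page", "Enter the user name and password", "Click Login"],
    expected_result="The user's home page is displayed",
)
print(login_case.identifier, "-", login_case.name)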

 What should be done after a bug is found?

 A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

 What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, and changes made to them, as well as who makes the changes. A good test engineer should be familiar with a full range of CM tools and concepts and be able to adapt to the customer's software tools and process needs.

 What if the software is so buggy it can't be tested at all?

A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

 How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below; a simple exit-criteria sketch follows the list.

v  Deadlines, e.g. release deadlines, testing deadlines;
v  Test cases completed with certain percentage passed;
v  Test budget has been depleted;
v  Coverage of code, functionality, or requirements reaches a specified point;
v  Bug rate falls below a certain level; or
v  Beta or alpha testing period ends. 
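
As a purely illustrative sketch, several of the factors above are often combined into a single exit-criteria check. The thresholds below are invented for the example, not recommended values.

def ok_to_stop(pass_rate, requirement_coverage, open_critical_bugs, new_bugs_per_day):
    """Combine a few of the stopping factors listed above into one decision."""
    return (pass_rate >= 0.95                # test cases completed with a certain % passed
            and requirement_coverage >= 0.90 # coverage reaches a specified point
            and open_critical_bugs == 0      # no outstanding show-stoppers
            and new_bugs_per_day <= 1)       # bug rate falls below a certain level

print(ok_to_stop(pass_rate=0.97, requirement_coverage=0.92,
                 open_critical_bugs=0, new_bugs_per_day=0.5))   # True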

 What if there isn't enough time for thorough testing?

A: Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The risk checklist should include answers to the following questions (a small risk-scoring sketch follows the checklist):

v  Which functionality is most important to the project's intended purpose?
v  Which functionality is most visible to the user?
v  Which functionality has the largest safety impact?
v  Which functionality has the largest financial impact on users?
v  Which aspects of the application are most important to the customer?
v  Which aspects of the application can be tested early in the development cycle?
v  Which parts of the code are most complex and thus most subject to errors?
v  Which parts of the application were developed in rush or panic mode?
v  Which aspects of similar/related previous projects caused problems?
v  Which aspects of similar/related previous projects had large maintenance expenses?
v  Which parts of the requirements and design are unclear or poorly thought out?
v  What do the developers think are the highest-risk aspects of the application?
v  What kinds of problems would cause the worst publicity?
v  What kinds of problems would cause the most customer service complaints?
v  What kinds of tests could easily cover multiple functionalities?
v  Which tests will have the best high-risk-coverage to time-required ratio?
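
Here is the small risk-scoring sketch referred to above: a likelihood-times-impact ranking that suggests where to spend limited test time first. The feature names and scores are invented for the example.

# feature: (likelihood of failure 1-5, impact of failure 1-5)
features = {
    "payment processing": (3, 5),
    "report export":      (4, 2),
    "user profile page":  (2, 2),
}

ranked = sorted(features.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(name, "-> risk score", likelihood * impact)
# Test the highest-scoring areas first when time is short.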


 What if the project isn't big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the considerations listed under "What if there isn't enough time for thorough testing?" do apply. The test engineer then should do "ad hoc" testing, or write up a limited test plan based on the risk analysis.

 What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...
v  Ensure the code is well commented and well documented; this makes changes easier for the developers.
v  Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
v  In the project's initial schedule, allow extra time commensurate with probable changes.
v  Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
v  Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.
v  Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.
v  Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.
v  Design some flexibility into automated test scripts.
v  Focus initial automated testing on application aspects that are most likely to remain unchanged.
v  Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.

 What if the application has functionality that wasn't in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If it is not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects minor areas, such as small improvements in the user interface, it may not be a significant risk.

How can SWQA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on processes, and adjust and experiment as the organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario is that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

What is BRD?

A: BRD stands for Business requirement document. This document specifies the needs and the requirements of the customer.

 How is testing affected by object-oriented designs?

A: A well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

 Why do you recommend that we should start testing during the design phase?

A: Because testing during the design phase can prevent defects later on. I recommend we verify three things...
v  Verify the design is good, efficient, compact, testable and maintainable.
v  Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, starting state of each module and how to guarantee the state of each module).
v  Verify the design provides for enough memory and I/O capacity, and a fast enough runtime, for the final product.

 What is software quality assurance?

A: Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.
v  Process oriented
v  Defect prevention based.

 What is quality control?

A: Quality control is the process by which the product quality is compared with applicable standards and the action taken when non-conformance is detected.
v  Product oriented
v  Defect detection based.

Processes and procedures – why follow them?

A: Detailed and well-written processes and procedures ensure that the correct steps are being executed to facilitate the successful completion of a task. They also ensure a process is repeatable. Once a test engineer has learned and reviewed the customer's business processes and procedures, he or she should follow them, and should also recommend improvements and/or additions.

 Standards and templates - what is supposed to be in a document?

A: All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information is less likely to be accidentally omitted from a document. Once a test engineer has learned and reviewed your standards and templates, he or she should use them, and should also recommend improvements and/or additions.

 What are the different levels of testing?

A: The different levels of testing are:
v  Unit Testing
v  Integration Testing
v  System testing
v  User Acceptance Testing

 What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.

What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.
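
As a tiny illustration of the white box idea, the sketch below (an invented function, not from any particular project) shows tests chosen so that every branch of the code is executed at least once.

def classify_amount(amount):
    """Invented example function with three branches."""
    if amount < 0:
        return "invalid"
    elif amount == 0:
        return "empty"
    return "ok"

# One test per branch, so branch coverage is 100% for this function:
assert classify_amount(-1) == "invalid"
assert classify_amount(0) == "empty"
assert classify_amount(10) == "ok"
print("all branches covered")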


What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.

What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
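
A minimal sketch of that reconciliation, with both "systems" reduced to invented stand-in functions, might look like this:

def current_system_total(records):
    """Stand-in for the existing system's calculation."""
    return sum(r["amount"] for r in records)

def new_system_total(records):
    """Stand-in for the replacement system's calculation."""
    return round(sum(r["amount"] for r in records), 2)

records = [{"amount": 10.50}, {"amount": 5.25}]
assert current_system_total(records) == new_system_total(records)   # outputs reconcile
print("parallel run reconciled:", new_system_total(records))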

What is functional testing?

A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers should perform functional testing.

What is usability testing?

A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Test engineers are needed, because programmers and developers are usually not appropriate as usability testers.

What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
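
As a hedged sketch of exercising an interface between components, the example below invents two small modules (an order service and an inventory store) and a test case that checks they agree across their interface.

class Inventory:
    """Invented inventory component."""
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[item] -= qty

class OrderService:
    """Invented order component that depends on Inventory."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        self.inventory.reserve(item, qty)   # the interface being exercised
        return {"item": item, "qty": qty, "status": "confirmed"}

def test_order_reduces_stock():
    inventory = Inventory({"widget": 5})
    order = OrderService(inventory).place_order("widget", 2)
    assert order["status"] == "confirmed"
    assert inventory.stock["widget"] == 3   # both components agree on the outcome

test_order_reduces_stock()
print("integration test passed")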

What is system testing?

A: System testing is black box testing, performed by the Test Team, and at the start of the system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios that occur in a "simulated real life" test environment and test all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.
Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by SWQA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at unit and integration test levels.

What is end-to-end testing?

A: End-to-end testing is similar to system testing, the *macro* end of the test scale; it is testing a complete application in a situation that mimics real-life use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
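
A minimal sketch of that baseline comparison, with invented test case IDs and pass/fail values, could be as simple as:

baseline = {"TC-001": "pass", "TC-002": "pass", "TC-003": "pass"}   # stored baseline results
current  = {"TC-001": "pass", "TC-002": "fail", "TC-003": "pass"}   # results for this release

discrepancies = {
    tc: (baseline[tc], current.get(tc))
    for tc in baseline
    if baseline[tc] != current.get(tc)
}
print(discrepancies)   # {'TC-002': ('pass', 'fail')} -- something was "undone"
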
What is sanity testing?

A: Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

What is bug life cycle?

Bug life cycles are similar to software development life cycles. At any time during the software development life cycle, errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out. The bug life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and it ends when the bug is fixed and no longer exists.

What is a traceability matrix?

A document showing the relationship between Test Requirements and Test Cases.
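
At its simplest, such a matrix is just a mapping from each requirement to the test cases that cover it. The requirement and test case IDs in the sketch below are invented for the example.

traceability = {
    "REQ-01 User login":      ["TC-001", "TC-002"],
    "REQ-02 Password reset":  ["TC-003"],
    "REQ-03 Account lockout": [],          # no coverage yet -- a gap worth flagging
}

uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements with no test cases:", uncovered)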

What is workflow testing?

Scripted end-to-end testing that duplicates specific workflows expected to be used by the end-user.

 What is a use case?

A use case is a specification of tests conducted from the end-user perspective. Use cases tend to focus on operating the software as an end-user would during their day-to-day activities.

 What is a golden bug?

A golden bug is nothing but the difference between our perception and reality.

What is the actual testing process in a practical or company environment?

Today I got an interesting question from a reader: how is testing carried out in a company, i.e. in a practical working environment? Those who are just out of college and starting to search for jobs are curious about what the actual working environment in companies is like.
Here I focus on the actual software testing process in companies. By now I have gained good experience of a software testing career and day-to-day testing activities, so I will try to share it practically rather than theoretically.
Whenever we get a new project there is an initial project familiarity meeting. In this meeting we basically discuss who the client is, what the project duration is and when delivery is due, and who is involved in the project, i.e. the manager, tech leads, QA leads, developers, testers, etc.
From the SRS (software requirement specification), the project plan (SDLC) is developed. The testers' responsibility is to create the software test plan from this SRS and project plan. Developers start coding from the design. The project work is divided into different modules and these modules are distributed among the developers. In the meantime the testers' responsibility is to create test scenarios and write test cases for their assigned modules. We try to cover almost all the functional test cases from the SRS. The data can be maintained manually in Excel test case templates or in bug tracking tools.
When developers finish individual modules, those modules are assigned to testers. Smoke testing is performed on these modules, and if they fail this test the modules are reassigned to the respective developers for a fix. For the modules that pass, manual testing is carried out from the written test cases. If any bug is found, it gets assigned to the module's developer and logged in the bug tracking tool. On a bug fix, the tester does bug verification and regression testing of all related modules. If the bug passes verification it is marked as verified and then closed; otherwise the above-mentioned bug cycle is repeated. (I will cover the bug life cycle in another post.)
Different tests are performed on individual modules, and integration testing is performed on module integration. These tests include compatibility testing, i.e. testing the application on different hardware, OS versions, software platforms, browsers, etc. Load and stress testing are also carried out according to the SRS. Finally, system testing is performed by creating a virtual client environment. On passing all the test cases, a test report is prepared and the decision is taken to release the product!
So this was a brief outline of the project life cycle process.
Here is a more detailed outline of what testing is carried out in each step of the software quality and testing life cycle specified by IEEE and ISO standards:
Review of the software requirement specifications
Objectives are set for the major releases
Target dates are planned for the releases
A detailed project plan is built; this includes the decision on design specifications
Develop the test plan based on the design specifications
Test plan: this includes objectives, the methodology adopted while testing, features to be tested and not to be tested, risk criteria, the testing schedule, multi-platform support and the resource allocation for testing.

Test specifications
This document includes the technical details (software requirements) required prior to testing.

Writing of Test Cases
Smoke(BVT) test cases
Sanity Test cases
Regression Test Cases
Negative Test Cases
Extended Test Cases

Development: modules are developed one by one
Installer binding: installers are built around the individual products
Build procedure:
A build includes installers of the available products, for multiple platforms.

Testing
Smoke test (BVT): a basic application test used to decide on further testing

Testing of new features
1. Cross-platform testing
2. Stress testing and memory leakage testing.

Bug Reporting
Bug report is created

Development - Code freezing
No more new features are added at this point.

Testing
Builds and regression testing.

Decision to release the product
A post-release scenario is planned for further objectives.

How to write effective Test cases, procedures and definitions

Writing effective test cases is a skill, and it can be achieved through experience and in-depth study of the application for which the test cases are being written.
Here I will share some tips on how to write test cases, test case procedures and some basic test case definitions.
What is a test case?
“A test case has components that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly.”
Each test case falls into one of the following levels, in order to avoid duplication of effort.
Level 1: At this level you write the basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: This is the stage in which you group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, at most about 10.
Level 4: Automation of the project. This minimizes human interaction with the system, so QA can focus on testing newly updated functionality rather than remaining busy with regression testing.

So you can observe a systematic growth from having no testable items to a full automation suite.
Why do we write test cases?
The basic objective of writing test cases is to validate the testing coverage of the application. If you are working in a CMMi company then you will strictly follow test case standards, so writing test cases brings some sort of standardization and minimizes the ad-hoc approach to testing.
How to write test cases?
Here is a simple test case format
Fields in test cases:
Test case id:
Unit to test:
What is to be verified?
Assumptions:
Test data:
Variables and their values
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:

So here is a basic format of test case statement:
Verify
Using [tool name, tag name, dialog, etc.]
With [conditions]
To [what is returned, shown, demonstrated]

Verify: used as the first word of the test case statement.
Using: identifies what is being tested. You can use ‘entering’ or ‘selecting’ here instead of ‘using’, depending on the situation.
For example: "Verify the login page, using a valid user name and password, with the 'Remember me' box checked, to confirm that the user's home page is displayed."

For any application basically you will cover all the types of test cases including functional, negative and boundary value test cases.
Keep in mind while writing test cases that all your test cases should be simple and easy to understand. Don’t write explanations like essays. Be to the point.
Try writing simple test cases as mentioned in the test case format above. Generally I use Excel sheets to write the basic test cases, and a tool like ‘Test Director’ when the test cases are going to be automated.
Feel free to comment below with any queries regarding test case writing or execution.

 

Sample bug report

Here is the example scenario that caused a bug:
Let's assume that in your application under test you want to create a new user. For that you need to log on to the application and navigate to the USERS menu > New User, then enter all the details in the ‘User form’, like First Name, Last Name, Age, Address, Phone, etc. Once you have entered all this information, you need to click the ‘SAVE’ button in order to save the user, and you should then see a success message saying, “New User has been created successfully”.
But when you logged in to the application, navigated to USERS menu > New User, entered all the required information to create a new user and clicked the SAVE button - BANG! The application crashed and an error page appeared on screen. (Capture this error message window and save it as an image file, e.g. with Microsoft Paint.)
Now this is the bug scenario and you would like to report this as a BUG in your bug-tracking tool.
How will you report this bug effectively?
Here is the sample bug report for above mentioned example:
(Note that some ‘bug report’ fields might differ depending on your bug tracking system)

SAMPLE BUG REPORT:
Bug Name: Application crash on clicking the SAVE button while creating a new user.
Bug ID: (It will be automatically created by the BUG Tracking tool once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (Depends on the Tool you are using)
Environment: Windows 2003/SQL Server 2005

Description:
Application crash on clicking the SAVE button while creating a new user, hence unable to create a new user in the application.

Steps To Reproduce:
1) Log on to the application.
2) Navigate to the Users menu > New User.
3) Fill in all the user information fields.
4) Click the ‘Save’ button.
5) Observe the error page “ORA1090 Exception: Insert values Error…”.
6) See the attached logs for more information (attach any further logs related to the bug, if available).
7) Also see the attached screenshot of the error page.

Expected result: On clicking the SAVE button, the user should see the success message “New User has been created successfully”.
(Attach ‘application crash’ screen shot.. IF any)
Save the defect/bug in the bug tracking tool. You will get a bug ID, which you can use for further reference to the bug.
A default ‘new bug’ mail will go to the respective developer and the default module owner (team leader or manager) for further action.

How is domain knowledge important for testers?

First of all I would like to introduce the three-dimensional view of a testing career described by Danny R. Faught. There are three categories of skill that need to be judged before hiring any software tester. What are those three skill categories?
1) Testing skill
2) Domain knowledge
3) Technical expertise.

No doubt any tester should have basic testing skills like manual testing and automation testing. A tester with common sense can find most of the obvious bugs in the software. But would you say that this much testing is sufficient? Would you release the product on the basis of this much testing? Certainly not. You will certainly have the product reviewed by a domain expert before it goes to market.
While testing any application you should think like an end-user. But every human being has limitations, and one can't be an expert in all three of the dimensions mentioned above. (If you are an expert in all of the above skills then please let me know ;-)) So you can't be sure that you will think 100% like the end-user who is going to use your application. The user who is going to use your application may have a good understanding of the domain he or she works in. You need to balance all these skills so that all product aspects get addressed.
Nowadays you can see that the professionals being hired by different companies are domain experts more than technical experts. The software industry is also seeing a good trend in that many professional developers and domain experts are moving into software testing.
We can observe one more reason why domain experts are in demand: when you hire fresh engineers who are just out of college, you cannot expect them to compete with experienced professionals. Experienced professionals certainly have the advantage of domain and testing experience; they have a better understanding of different issues and can deliver the application better and faster.
Here are some of the examples where you can see the distinct edge of domain knowledge:
1) Mobile application testing.
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing

How will you test such applications without knowledge of the specific domain? Are you going to test BFSI applications (Banking, Financial Services and Insurance) just for UI or functionality or security or load or stress? You should know what the user requirements are in banking, the working procedures, the commerce background, exposure to brokerage, etc., and test the application accordingly; only then can you say that your testing is enough. Here comes the need for subject-matter experts.
Let's take the example of my current project: I am currently working on a search engine application, where I need to know the basics of search engine terminology and concepts. Many times I see testers from other teams asking me questions like what are ‘publishers’ and ‘advertisers’, what is the difference and what do they do? Do you think they can test the application based on current online advertising and SEO? Certainly not, unless and until they become well familiar with these terms and functionalities.
When I know the functional domain better, I can write and execute more test cases more effectively and can realistically simulate end-user actions, which is distinctly a big advantage.
Here is the big list of the required testing knowledge:
  • Testing skill
  • Bug hunting skill
  • Technical skill
  • Domain knowledge
  • Communication skill
  • Automation skill
  • Some programming skill
  • Quick grasping
  • Ability to Work under pressure …
That is going to be a huge list, so you will certainly ask, do I need to have all these skills? It depends on you. You can stick to one skill, or be an expert in one skill and have a good understanding of the others, or take a balanced approach to all of them. This is a competitive market and you should definitely take advantage of it. Make sure to be an expert in at least one domain before making any move.
What if you don't have enough domain knowledge?
You may be placed on any project, and the company can assign any work to you. So what if you don't have enough domain knowledge of that project? You need to quickly grasp as many concepts as you can. Try to understand the product as if you were the customer, and what the customer will do with the application. Visit the customer site if possible to see how they work with the product, read online resources about the domain of the application you want to test, participate in events addressing that domain, and meet the domain experts. Alternatively, the company may provide all of this as in-house training before assigning any domain-specific task to testers.

There is no specific stage where you need this domain knowledge; you need to apply your domain knowledge in each and every phase of the software testing life cycle.

Bug life cycle

What is Bug/Defect?
Simple Wikipedia definition of Bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”
Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.

or
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

Lastly the general definition of bug is: “failure to conform to specifications”.
If you want to detect and resolve defects in the early development stages, defect tracking and the software development phases should start simultaneously.
We will discuss more on Writing effective bug report in another article. Let’s concentrate here on bug/defect life cycle.
Life cycle of Bug:
1) Log new defect
When a tester logs a new bug, the mandatory fields are:
Build version, Submit On, Product, Module, Severity, Synopsis and Description to Reproduce

To the above list you can add some optional fields if you are using a manual bug submission template.
These optional fields are: customer name, browser, operating system, file attachments or screenshots.

The following fields remain either specified or blank:
If you have the authority to set the bug Status, Priority and ‘Assigned to’ fields then you can specify them. Otherwise the test manager will set the status and bug priority and assign the bug to the respective module owner.

Look at the following bug life cycle (ref: the Bugzilla bug life cycle):
[Figure: bug life cycle diagram]
The figure is quite complicated, but once you consider the significant steps in the bug life cycle you will quickly get an idea of the bug's life.
On successful logging, the bug is reviewed by the development or test manager. The test manager can set the bug status as Open, can assign the bug to a developer, or the bug may be deferred until the next release.
When the bug gets assigned to a developer, he or she can start working on it. The developer can set the bug status as ‘Won't fix’, ‘Couldn't reproduce’, ‘Need more information’ or ‘Fixed’.
If the bug status set by the developer is either ‘Need more info’ or ‘Fixed’, then QA responds with a specific action. If the bug is fixed, QA verifies the fix and can set the bug status as ‘Verified closed’ or ‘Reopen’.
Bug status description:
These are various stages of bug life cycle. The status caption may vary depending on the bug tracking system you are using.

1) New: When QA files a new bug.
2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, then the project manager can set the bug status as deferred.
3) Assigned: The project lead or manager sets the ‘Assigned to’ field and assigns the bug to a developer.
4) Resolved/Fixed: When the developer makes the necessary code changes and verifies the changes, he or she can set the bug status to ‘Fixed’ and the bug is passed to the testing team.
5) Could not reproduce: If the developer is not able to reproduce the bug by following the steps given in the bug report, the developer can mark the bug as ‘CNR’. QA then needs to check whether the bug is reproducible and can reassign it to the developer with detailed reproduction steps.
6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he or she can mark the bug as ‘Need more information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7) Reopen: If QA is not satisfied with the fix, and the bug is still reproducible even after the fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.
8) Closed: If the bug is verified by the QA team, the fix is OK and the problem is solved, then QA can mark the bug as ‘Closed’.
9) Rejected/Invalid: Sometimes a developer or team lead can mark a bug as ‘Rejected’ or ‘Invalid’ if the system is working according to specifications and the bug is just due to some misinterpretation.

Manual Testing Interview
What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
• Hire good people
• Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
• Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.

How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? how much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end pages.
• The page owner, revision date, and a link to a contact person or organization should be included on each page.
Many new web site test tools have appeared in recent years and more than 280 of them are listed in the 'Web Test Tools' section.
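
As a rough illustration of the client-side timing questions above, here is a single-request response-time probe using only Python's standard library. It is not a load test (dedicated web load testing tools drive many concurrent virtual users), and the URL is a placeholder.

import time
import urllib.request

def response_time(url, timeout=10):
    """Fetch the URL once and return (HTTP status, elapsed seconds)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
        status = resp.status
    return status, time.perf_counter() - start

status, seconds = response_time("https://example.com/")
print("HTTP", status, "in", round(seconds, 2), "seconds")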

What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck who described the approach in his book 'Extreme Programming Explained' (See the Softwareqatest.com Books page.). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected. For more info see the XP-related listings in the Softwareqatest.com 'Other Resources' section.

Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."

Why does software have bugs?
• miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
• software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered.
• programming errors - programmers, like anyone else, can make mistakes.
• changing requirements (whether documented or undocumented) - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' above.
• time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
• egos - people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'
instead of:
'that adds a lot of complexity and we could end up
making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I
take a close look at it'
'we can't figure out what that old spaghetti code
did in the first place'

If there are too many unrealistic 'no problem's', the result is bugs.

• poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
• software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?
• A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
• Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
• For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
• The most value for effort will be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation and (b) design inspections and code inspections.

What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for management to get serious about quality assurance?'. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.
What is a stub? Explain from a testing point of view.
A: A stub is a dummy program or component that stands in for code that is not yet ready for testing. For example, if a project has 4 modules and the last one is still unfinished and there is no time, we use a dummy program in place of that fourth module so that all 4 modules can still be run together. That dummy program is known as a stub.
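
A minimal sketch of the idea, with an invented checkout example (the unfinished "module" here is a tax calculation), might look like this:

def calculate_tax_stub(order_total):
    """Stub for the unfinished tax module: returns a fixed, known value."""
    return 0.0   # the real implementation would look up regional tax rates

def checkout(order_total, tax_fn=calculate_tax_stub):
    """The finished module under test; it calls the tax module through tax_fn."""
    return order_total + tax_fn(order_total)

# The rest of the system can be exercised while the tax module is still pending:
assert checkout(100.0) == 100.0
print("checkout works with the stub in place")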
