Saturday, 31 December 2011

What Makes Your Bug Report PERFECT?

We live with bugs! Yes, for us software testers everything revolves around bugs. Whatever we talk about, measure, find, or report is in terms of bugs. But do we actually realize how much responsibility lies in our hands when we find a bug? How much we contribute towards making the product we are testing usable by millions of people across the globe? A ‘bug’ is the main way we communicate with our client to tell them what is missing or what is wrong with their product. So when you submit a bug, you are not actually breaking the product to show how good you are at it! Instead, you are helping the product become perfect.
So, just submitting a bug is not enough. You also need to understand what you have submitted. I have heard a lot of new joiners say, “What’s the big deal in submitting a bug? We have a ready-made defect logging template; we just fill it in and a bug is logged.” If that were the right attitude, we could log millions of bugs in a single day! But just submitting hundreds of bugs to inflate your bug count is not enough.
Will you be called a good tester only because of a large bug count? NO. You also need to make the developer understand your bug. What’s the point of submitting it if no one can solve the problem? All your effort in finding the issue goes to waste. Suppose you logged a bug that breaks a major functionality, but while writing your report you left out the information that says in which area the bug occurred. Can anyone understand it? Will they be able to solve it? It will either be rejected for improper information or be assigned back to you. It just wastes your time as well as the developer’s.
And if it’s a major issue, imagine how much impact it could have on the business and on the client. And, of course, on your job :)
There is no perfect mantra for a good bug report. The basic point is that you should convey the issue to the developer clearly and with all the data, so that it can be solved easily. Below are some of the ‘ingredients’ of a good bug report.
Bug reports usually consist of a bug title, the steps, and a screenshot, but they can be made more helpful if you add a few more points:

A bug report should go like this:

#1 Bug Title:

This should be a short (just about a line) and easy-to-understand description of what exactly the problem is. It should ideally name the area in which the bug occurs, along with the error message if any. For example, if there is a bug in the installation of a product when you select custom install, your bug title should be something like “Error during custom install” or “Product fails to install, gives error 1234 during custom installation”. The second one is more apt as it also includes the error code. But if the error code is very long, avoid it in the title; you can include it in the description instead.

#2 Repro Steps:

The repro steps, or reproducibility steps, are the most important part of a bug report. These steps tell the reader how to arrive at the problem, from the very start of what you did. Here we generally tend to omit simple steps that we think are unnecessary, but no step is unimportant. That said, also avoid duplicate steps.
Example of repro steps would be-
  1. Install app
  2. Go to File > Open
  3. Give a file name
  4. Click on cancel
  5. Observe the error
Do not forget to mention where the problem is; that is, don’t just leave the steps as they are, but point out where you encountered the problem. Also mention any prerequisites installed before performing these steps. Do not make the steps repetitive or lengthy.

#3 Actual and Expected Results:

This describes the actual result you got and the result you expected.

#4 Description:

This may or may not be necessary. It depends on whether the issue requires an explanation, such as a long error message or a special note.

#5 Priority:

How important the bug is, and how soon it should be resolved.

#6 Severity:

How severe the problem is.

#7 Frequency:

How frequently the issue occurs, for example every time, or only once.

#8 Environment info:

This is your system information, such as the OS version, the browser used (if it’s a web app), any updates installed on top of it, the Flash version if relevant, etc.

#9 Screenshot:

When you give a screenshot, make sure you highlight the area where you think the bug is. This is very helpful when logging UI bugs: the developer will be able to see where the bug actually is, even if the issue is otherwise hard to detect. Naming your screenshots according to the issue is also a good idea.

#10 Additional Info:

This is any additional information, such as what you think might cause the problem, or whether the issue is browser/OS specific.
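Putting these ingredients together, here is a hypothetical report for the custom-install bug used as an example above (the error code, build number and environment details are made up for illustration):
  Bug Title: Product fails to install, gives error 1234 during custom installation
  Repro Steps:
    1. Run the product installer
    2. Select the ‘Custom’ install option and choose a non-default folder
    3. Click on Install
    4. Observe the error
  Actual Result: Installation aborts with error 1234
  Expected Result: The product installs into the chosen folder
  Priority: High; Severity: Major; Frequency: Every time
  Environment: Windows 7 (32-bit), product build 1.0.123
  Screenshot: error-1234-custom-install.png (error dialog highlighted)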
The basic point to remember while submitting a bug is this: the person who reviews your bug should be able to understand it, and in some way it should also help them solve the problem. Also, make sure you are not “bugging” the developer too much! After all, he is as busy as we are :)

How should testers respond when their bugs are rejected by clients?

In the field of contract software testing, you won’t always see eye to eye with the client. What you consider a critical bug, they might see as a non-issue (or worse, a ‘feature’). What you call a major security flaw, they might consider such a remote possibility that it doesn’t even deserve a mention.
You might ask how you bridge such a gap between your level of testing and the client’s level of acceptance and understanding of product integrity and the testing process in general. The answer is simple:
You don’t.
It isn’t your job to convert the client to your way of thinking. Yes, you can contest a bug that they reject out of hand if you were technically correct to report it. Sometimes they’ll accept it as valuable feedback, but most of the time they’ll just ignore contested bugs. This is something you have to live with.

15 Top Interview Q & A’s in Manual Testing Interviews

  1. Why do most software companies prefer manual testing even though many automation testing tools are available in the market?
  2. What is test coverage?
  3. What are the types of CMM levels? Explain each level.
  4. How do you go about testing a project in manual testing?
  5. What is a build? What is a build configuration?
  6. What are entry criteria and exit criteria?
  7. What types of metrics do we prepare in testing?
  8. What is independent testing?
  9. What is the difference between system integrated testing, integrated system testing and manual testing?
  10. Please explain test matrices.

What is Error Guessing and Error Seeding?

Error guessing is a test case design technique in which the tester guesses what faults might occur and designs tests to expose them.
Error seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program.
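A worked example of the seeding estimate (the numbers are hypothetical): suppose 10 known faults are seeded into a build, and testing detects 8 of the seeded faults along with 40 genuine faults. Since 80% of the seeded faults were found, the 40 genuine faults can be taken as roughly 80% of the real total, giving an estimate of about 50 real faults overall, of which about 10 remain undetected.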

Friday, 30 December 2011

QTP Interview Questions


1. What is QuickTest Pro?
It is Mercury Interactive’s keyword-driven testing tool.
2. What kinds of applications can we test using QTP?
Using QTP we can test standard Windows applications, web objects, ActiveX controls and Visual Basic applications.
3. What is called a test?
A test is a collection of steps organized into one or more actions, which are used to verify that your application performs as expected.
4. What is meant by a business component?
It is a collection of steps representing a single task in your application. Business components are combined into specific scenarios to build business process tests in Mercury Quality Center with Business Process Testing.
5. How is a test created in QTP?
As we navigate through our application, QTP records each step we perform and generates a test or component that graphically displays these steps in a table-based keyword view.
6. What are the main tasks QTP accomplishes after a test is created?
After we have finished recording, we can instruct QTP to check the properties of specific objects in our application by means of the enhancement features available in QTP. When we perform a run session, QTP performs each step in our test or component. After the run session ends, we can view a report detailing which steps were performed and which ones succeeded or failed.
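Custom entries can also be written to that report from a script. A minimal sketch (the step name and details are hypothetical):
  ' Log a custom entry into the run results
  Reporter.ReportEvent micPass, "Login check", "User was able to log in"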
7. What are actions?
A test is composed of actions, and the steps we add to a test are included within the test’s actions. By default, each test begins with a single action. We can divide our test into multiple actions to organize it.
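One action can also call another from the Expert View. A minimal sketch (the action name "Login" is hypothetical):
  ' Run the "Login" action once, independent of data table iterations
  RunAction "Login", oneIteration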
8. What are the main stages involved in testing with QTP?
  • Creating tests or business components
  • Running tests or business components
  • Analyzing results
9. How is the creation of a test accomplished in QTP?
We can create a test or component either by recording a session on our application or web site, or by building an object repository and adding steps manually to the keyword view using keyword-driven functionality. We can then modify our test with programming statements.
10. What is the purpose of the Documentation column in the keyword view?
The Documentation column of the keyword view displays a description of each step in easy-to-understand sentences.
11. What else is the keyword view in QTP termed?
The icon-based view.
12. What is the use of the data table in QTP?
It is used for parameterizing the test.
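A minimal sketch of data table parameterization in the Expert View, assuming a column named UserName exists in the Global sheet (the browser, page and edit box names are hypothetical):
  ' Read the current row's value from the Global data sheet
  uname = DataTable("UserName", dtGlobalSheet)
  ' Type it into a login edit box
  Browser("Login").Page("Login").WebEdit("txtUser").Set uname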
13. What is the use of working with actions?
To design modular and efficient tests.
14. What are the file extensions of the code file and the object repository file in QTP?
The extension of the code file is .vbs and the extension of the object repository file is .tsr.
15. What properties can we use to identify a browser and a page when using descriptive programming?
The name property is used to identify the browser and the title property is used to identify the page.
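For example, a descriptive programming sketch (the name and title values are hypothetical):
  ' Identify the browser by its name property and the page by its title, then click a link
  Browser("name:=Gmail.*").Page("title:=Inbox.*").Link("text:=Sign out").Click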
16. What scripting languages can we use when working with QTP?
VBScript.
17. Give an example of where we can use a COM interface in a QTP project.
A COM interface comes into the picture when the front end needs to talk to the back end, for example when driving an application such as Excel through automation (see the next question).
18. Explain the keyword CreateObject with an example.
CreateObject is used to create and return a reference to an automation object.
For example:
  Dim ExcelSheet
  Set ExcelSheet = CreateObject("Excel.Sheet")        ' Set is required when assigning an object reference
  ExcelSheet.ActiveSheet.Cells(1, 1).Value = "Test"   ' drive Excel through its COM interface
19. How do we open an Excel sheet using a QTP script?
You can open an Excel file in QTP by using the SystemUtil object:
  SystemUtil.Run "<full path of the file>"
20. Is it necessary to learn VBScript to work with QTP?
It is not mandatory to have mastered VBScript to work with QTP. The tool is largely user friendly, and a grasp of basic VBScript concepts will suffice for good results.
21. If WinRunner and QTP are both functional testing tools from the same company, why did a separate tool, QTP, come into the picture?
QTP has some additional functionality that is not present in WinRunner. For example, with QTP you can test (both functional and regression testing) an application developed in .NET technology, which is not possible in WinRunner.
22. Explain in brief the QTP test object model.
The test object model is a large set of object types, or classes, that QTP uses to represent the objects in our application. Each test object class has a list of properties that can uniquely identify objects of that class.
23. What is a run-time data table?
The test results tree includes a table-shaped icon that displays the run-time data table: a table that shows the values used to run a test containing data table parameters, or the data table output values retrieved from the application under test.
24. What are the components of a QTP test script?
A QTP test script is a combination of VBScript statements and statements that use QuickTest test objects, methods and properties.
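A tiny sketch of that mix, with a plain VBScript variable driving a QuickTest test object (the object names are hypothetical):
  Dim caption                 ' plain VBScript statement
  caption = Browser("MyApp").Page("Home").WebButton("Submit").GetROProperty("value")
  MsgBox caption              ' display the button's runtime caption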
25. What is a test object?
It is an object that QTP uses to represent an object in our application. Each test object has one or more methods and properties that we can use to perform operations on and retrieve values for that object. Each test object also has a number of identification properties that describe it.
26. What rules and guidelines should be followed while working in the Expert View?
Case sensitivity
VBScript is not case sensitive and does not differentiate between upper-case and lower-case spellings of words.
Text strings
When we enter a value as a string, we must add quotation marks before and after the string.
Variables
We can use variables to store strings, integers, arrays and objects. Using variables helps make our script more readable and flexible.
Parentheses
To achieve the desired result and avoid errors, it is important to use parentheses () correctly in our statements.
Comments
We can add comments to our statements using an apostrophe ('), either at the beginning of a separate line or at the end of a statement.
Spaces
We can add extra blank spaces to our script to improve clarity; these spaces are ignored by VBScript.
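A short sketch pulling these rules together (the values are illustrative):
  ' Comments start with an apostrophe
  Dim greeting, size            ' variables can hold strings, numbers, arrays or objects
  greeting = "Hello, QTP"       ' string values need quotation marks
  size = Len(greeting)          ' parentheses are needed when a function's return value is used
  MsgBox greeting               ' VBScript is not case sensitive: msgbox works too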

Wednesday, 28 December 2011

Questions


What are the drawbacks of data driven test?

There are no drawbacks in the data-driven test concept itself: a data-driven test simply means re-running the same test with different data, and that is not a problem. The question is probably about the drawbacks of the ways data-driven testing is implemented in WinRunner.
We can implement a data-driven test in WinRunner in four ways:
  1. Dynamic test data submission
  2. From flat files
  3. From front-end grids
  4. Through an Excel sheet
Each new method was introduced because the previous one had drawbacks; the drawbacks of one method created the need for the next.
Drawbacks:
1. In the first method, user intervention is required to submit the data, so it is not feasible to test with large volumes of test data.
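As an illustration of the second method, here is a minimal VBScript sketch that reads test data from a flat file, one line per iteration (the file path is hypothetical; WinRunner itself would use the equivalent TSL file functions):
  ' Read test data line by line from a flat file
  Set fso = CreateObject("Scripting.FileSystemObject")
  Set f = fso.OpenTextFile("C:\testdata.txt", 1)   ' 1 = ForReading
  Do Until f.AtEndOfStream
      dataLine = f.ReadLine
      ' ... drive the application under test with dataLine ...
  Loop
  f.Close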

Tuesday, 27 December 2011

CMM (Capability Maturity Model)


The Capability Maturity Model (CMM), also known as the Software CMM (SW-CMM), was first described by Watts Humphrey in his book Managing the Software Process. The CMM is a process model based on software best-practices effective in large-scale, multi-person projects.
The CMM has been retired and has not been updated in over 10 years. It has been superseded by the CMMI (Capability Maturity Model Integration).
The CMM has been used to assess the maturity of organizations in areas as diverse as software engineering, systems engineering, project management, risk management, system acquisition, information technology (IT) and personnel management, against a scale of five maturity levels, namely: Initial, Repeatable, Defined, Managed and Optimized.
CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in Pittsburgh. It has been used extensively for avionics software and government projects around the world.
Currently, some government departments require software development contract organizations to achieve and operate at a level-3 standard.

Maturity model

The Capability Maturity Model (CMM) is a way to develop and refine an organization's processes. The first CMM was for the purpose of developing and refining software development processes. A maturity model is a structured collection of elements that describe characteristics of effective processes. A maturity model provides:
  • a place to start
  • the benefit of a community’s prior experiences
  • a common language and a shared vision
  • a framework for prioritizing actions
  • a way to define what improvement means for your organization.
A maturity model can be used as a benchmark for assessing different organizations for equivalent comparison. The model describes the maturity of the company based upon the project the company is handling and the related clients.

Levels of the CMM

Level 1 - Initial

At maturity level 1, processes are usually ad hoc, and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization, and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects.
Maturity level 1 organizations are characterized by a tendency to overcommit, to abandon processes in a time of crisis, and to be unable to repeat their past successes.
Level 1 software project success depends on having high quality people.

Level 2 - Repeatable [Managed]

At maturity level 2, software development successes are repeatable. The processes may not repeat for all the projects in the organization. The organization may use some basic project management to track cost and schedule.
Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.
Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks).
Basic project management processes are established to track cost, schedule, and functionality. The minimum process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is still a significant risk of exceeding cost and time estimates.

Level 3 - Defined

The organization’s set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes by applying the organization’s set of standard processes, tailored, if necessary, within similarly standardized guidelines.
The organization’s management establishes process objectives for the organization’s set of standard processes, and ensures that these objectives are appropriately addressed.
A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on each particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit.

Level 4 - Quantitatively Managed

Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Organizations at this level set quantitative quality goals for both software process and software maintenance. Subprocesses are selected that significantly contribute to overall process performance. These selected subprocesses are controlled using statistical and other quantitative techniques. A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.

Level 5 - Optimizing

Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization’s set of standard processes are targets of measurable improvement activities.
Process improvements to address common causes of process variation and measurably improve the organization’s processes are identified, evaluated, and deployed.
Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning.
A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical predictability) to achieve the established quantitative process-improvement objectives.

Six Sigma

Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects (driving towards six standard deviations between the mean and the nearest specification limit) in any process -- from manufacturing to transactional and from product to service.
The fundamental objective of the Six Sigma methodology is the implementation of a measurement-based strategy that focuses on process improvement and variation reduction through the application of Six Sigma improvement projects. This is accomplished through the use of two Six Sigma sub-methodologies:
DMAIC
The Six Sigma DMAIC process (define, measure, analyze, improve, control) is an improvement system for existing processes falling below specification and looking for incremental improvement.
DMADV
The Six Sigma DMADV process (define, measure, analyze, design, verify) is an improvement system used to develop new processes or products at Six Sigma quality levels. It can also be employed if a current process requires more than just incremental improvement.

Monday, 26 December 2011

Common Questions for Testers



  1. What is the testing process?
Verifying that input data produces the expected output.
  2. What is the difference between testing and debugging?
The big difference is that debugging is conducted by the programmer, who fixes the errors during the debugging phase. The tester never fixes the errors, but rather finds them and returns them to the programmer.
  3. What is the difference between structural and functional testing?

Wednesday, 21 December 2011

Interview Questions for Manual Testing


 What is installation testing?

A: Installation testing is the testing of a full, partial, or upgrade install/uninstall process. The installation test is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items (performed by the application's system administration), the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed when necessary.

 What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

 What is recovery/error testing?

Tuesday, 29 November 2011

Automation Testing

 
There are two ways of testing:
1. Manual Testing
2. Automation Testing
1). Manual Testing

It is a process in which all the phases of the Software Testing Life Cycle, such as test planning, test development, test execution, result analysis, bug tracking and reporting, are accomplished manually with human effort.

 

   Drawbacks of Manual Testing


1. More people are required.
2. It is time consuming.
3. It is less accurate, since human error is likely.
4. It is tiring.
5. Simultaneous actions are almost impossible.
6. Repeating a task in exactly the same manner is not easy to do manually.

2). Automation Testing

Thursday, 3 November 2011

Software Testing

Testing is the process of exercising or evaluating a system or a system component, by manual or automated means, to verify that it satisfies specified requirements.
                                           or
Testing is detecting the difference between existing and required conditions.
                                           or
Testing is the process of executing a program with the intent of finding errors.
                                           or
Testing is verifying and validating the application with respect to customer requirements.
                                           or
Testing is finding the differences between customer-expected and actual values.
Quality
Quality is defined as not only conformance to requirements but also the presence of value (user friendliness).
·         IT’s view of quality: the software meets requirements.
·         The user’s view of quality: the software is fit for use.