Tuesday, October 12, 2010

Useful Software Test Metrics

I) Introduction

"When we can measure what we are speaking about and express it in numbers, we know something about it; but when we cannot measure, when we cannot express it in numbers, our knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but we have scarcely, in our thoughts, advanced to the stage of science." (Lord Kelvin)

Why do we need metrics?

“We cannot improve what we cannot measure.”

“We cannot control what we cannot measure”

AND TEST METRICS HELP US TO:

  • Make decisions about the next phase of activities
  • Provide evidence for a claim or prediction
  • Understand what type of improvement is required
  • Make decisions on process or technology changes

II) Types of Metrics

Base Metrics (Direct Measure)

Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.

Ex: # of Test Cases, # of Test Cases Executed

Calculated Metrics (Indirect Measure)

Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).

Ex: % Complete, % Test Coverage

Base Metrics & Test Phases

  • # of Test Cases (Test Development Phase)
  • # of Test Cases Executed (Test Execution Phase)
  • # of Test Cases Passed (Test Execution Phase)
  • # of Test Cases Failed (Test Execution Phase)
  • # of Test Cases Under Investigation (Test Development Phase)
  • # of Test Cases Blocked (Test Development/Execution Phase)
  • # of Test Cases Re-executed (Regression Phase)
  • # of First Run Failures (Test Execution Phase)
  • Total Executions (Test Reporting Phase)
  • Total Passes (Test Reporting Phase)
  • Total Failures (Test Reporting Phase)
  • Test Case Execution Time (Test Reporting Phase)
  • Test Execution Time (Test Reporting Phase)

Calculated Metrics & Phases

The metrics below are produced during the Test Reporting or Post-Test Analysis phase:

  • % Complete
  • % Defects Corrected
  • % Test Coverage
  • % Rework
  • % Test Cases Passed
  • % Test Effectiveness
  • % Test Cases Blocked
  • % Test Efficiency
  • 1st Run Fail Rate
  • Defect Discovery Rate
  • Overall Fail Rate

III) Crucial Web-Based Testing Metrics

Test Plan coverage on Functionality

The total number of requirements vs. the number of requirements covered by test scripts.

  • (Number of requirements covered / Total number of requirements) * 100

Define the requirements at the time of effort estimation.

Example: The total number of requirements estimated is 46; requirements tested, 39; blocked, 7. Coverage is therefore (39 / 46) * 100 = 84.8%.

Note: Define requirements clearly at the project level.
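The coverage formula is easy to script for a status report. Here is a minimal Python sketch (the function name and structure are my own, for illustration) that reproduces the worked example:

    def requirement_coverage(covered, total):
        """Percentage of requirements covered by test scripts."""
        if total <= 0:
            raise ValueError("total number of requirements must be positive")
        return covered / total * 100

    # Worked example from above: 39 of the 46 estimated requirements tested.
    print(round(requirement_coverage(39, 46), 1))  # 84.8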

Test Case defect density

The number of test scripts that uncovered defects vs. the number of test scripts developed and executed.

  • (Defective Test Scripts / Total Test Scripts) * 100

Example: Total test scripts developed: 1360; executed: 1280; passed: 1065; failed: 215.

So, the test case defect density is:

(215 * 100) / 1280 = 16.8%

This 16.8% can also be called the test case efficiency percentage, since it depends on the number of test cases that uncovered defects.
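As a quick check, the same calculation in Python (a sketch; the names are illustrative):

    def test_case_defect_density(defective_scripts, executed_scripts):
        """Percentage of executed test scripts that uncovered defects."""
        return defective_scripts / executed_scripts * 100

    # Worked example from above: 215 failed scripts out of 1280 executed.
    print(round(test_case_defect_density(215, 1280), 1))  # 16.8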

Defect Slippage Ratio

The number of defects slipped (reported from production) vs. the number of defects reported during execution.

  • [Number of Defects Slipped / (Number of Defects Raised - Number of Defects Withdrawn)] * 100

Example: Customer-filed defects: 21; total defects found during testing: 267; invalid (withdrawn) defects: 17.

So, Slippage Ratio is

[21/ (267-17)] X 100 = 8.4%
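The same ratio in Python (a sketch with illustrative names), reproducing the example:

    def defect_slippage_ratio(slipped, raised, withdrawn):
        """Production-reported defects as a percentage of valid defects found in testing."""
        valid = raised - withdrawn
        return slipped / valid * 100

    # Worked example from above: 21 customer-filed, 267 raised, 17 withdrawn.
    print(round(defect_slippage_ratio(21, 267, 17), 1))  # 8.4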

Requirement Volatility

The number of requirements agreed vs. the number of requirements changed.

  • (Number of Requirements Added + Deleted + Modified) * 100 / Number of Original Requirements
  • Ensure that the requirements are normalized, i.e., defined properly, when estimating

Example: The VSS 1.3 release initially had 67 requirements; later, 7 new requirements were added, 3 of the initial requirements were removed, and 11 were modified.

So, requirement Volatility is

(7 + 3 + 11) * 100/67 = 31.34%

This means almost a third of the requirements changed after they were initially identified.
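In Python (a sketch; names are my own), the volatility calculation looks like this:

    def requirement_volatility(added, deleted, modified, original):
        """Requirement churn as a percentage of the original requirement count."""
        return (added + deleted + modified) / original * 100

    # Worked example from above: the VSS 1.3 release.
    print(round(requirement_volatility(7, 3, 11, 67), 2))  # 31.34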

Review Efficiency

Review Efficiency is a metric that offers insight into the quality of both reviews and testing.

Some organizations also call this "static testing" efficiency and aim to find a minimum of 30% of all defects through static testing.

Review Efficiency = (Total number of defects found by reviews / Total number of project defects) * 100

Example: A project found a total of 269 defects in its various reviews, all of which were fixed; the test team then reported 476 valid defects.

So, Review efficiency is [269/(269+476)] X 100 = 36.1%
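Review efficiency follows the same pattern; a small Python sketch (illustrative names) that reproduces the example:

    def review_efficiency(review_defects, test_defects):
        """Share of all project defects that were caught by reviews (static testing)."""
        return review_defects / (review_defects + test_defects) * 100

    # Worked example from above: 269 review defects, 476 valid test defects.
    print(round(review_efficiency(269, 476), 1))  # 36.1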

Efficiency and Effectiveness of Processes

  • Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.
  • Efficiency: Doing the thing right. It concerns the resources used for the service to be rendered.

Metrics for Software Testing

• Defect Removal Effectiveness

DRE = (Defects removed during development phase * 100%) / Defects latent in the product

Defects latent in the product = Defects removed during development phase + Defects found later by the user

• Efficiency of Testing Process (define size in KLOC, function points, or requirements)

Testing Efficiency = Size of software tested / Resources used
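Both formulas are straightforward to compute. Below is a minimal Python sketch; the function names and the DRE sample figures are my own, for illustration only:

    def defect_removal_effectiveness(removed_in_development, found_by_users):
        """DRE: defects removed before release as a percentage of all latent defects."""
        latent = removed_in_development + found_by_users
        return removed_in_development / latent * 100

    def testing_efficiency(size_tested, resources_used):
        """Size of software tested (e.g., KLOC) per unit of resource (e.g., person-days)."""
        return size_tested / resources_used

    # Hypothetical figures: 100 defects removed in development, 25 found later by users.
    print(round(defect_removal_effectiveness(100, 25), 1))  # 80.0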

References:

1. Measuring software product quality during testing by Rob Hendriks, Robert van Vonderen and Erik van Veenendaal

2. Software Metrics: A Rigorous Approach by Norman E. Fenton

3. Software Test Metrics – A Practical Approach by Shaun Bradshaw

4. Testing Effectiveness Assessment (an article by Software Quality Consulting)

5. P.R.Vidyalakshmi

6. Measurement of the Extent of Testing (http://www.kaner.com/pnsqc.html)

7. http://www.stickyminds.com/

8. Effective Test Status Reporting by Rex Black

9. http://www.projectmanagement.tas.gov.au/guidelines/pm5_10.htm

10. http://whatis.com/

11. Risk Based Test Reporting by Paul Gerrard

Wednesday, September 15, 2010

Desktop Application Testing, Client-Server Application Testing, and Web Application Testing

Each differs in the environment in which it is tested, and you progressively lose control over that environment as you move from desktop to web applications.

A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories such as GUI, functionality, load, and the backend (i.e., the database).

In a client-server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine. You test broadly in categories such as GUI on both sides, functionality, load, client-server interaction, and the backend. This environment is mostly used on intranet networks, where you know the number of clients and servers and their locations in the test scenario.

A web application is a bit different and more complex to test, as the tester has far less control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine, so it has to be tested on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so a web application is tested mainly for browser compatibility, operating system compatibility, error handling, static pages, backend testing, and load testing.

Differences between QTP versions (9.2, 9.5, 10.0)


Between QTP 9.5 & earlier versions

1. Checkpoints and output values can be seen and edited in the Object Repository, which was not possible in earlier versions of QTP.
2. The installation file is a single bundle with all add-ins; previously, add-ins had to be downloaded separately.
3. A movie recorder was introduced in the results.
4. The Web Add-in Extensibility add-in was introduced.
5. Maintenance run mode was introduced.
6. It also supports 64-bit Windows editions, while earlier versions supported only 32-bit systems.
7. The bitmap tolerance feature was introduced in QTP 9.5.

Between QTP 10.0 & earlier versions

1. Delphi Add-in Extensibility was introduced.
2. System performance parameters can be monitored.
3. COM objects for bitmap checkpoints have been introduced.
4. Complete UI changes.
5. Version control with the help of QC 10.0.
6. To Do pane.
7. Actions can be called dynamically.
8. From the QTP results, you can jump directly to the corresponding script lines.
9. All test resources can be saved in one go from QC.

Difference between QTP 8.2 and QTP 9.0/9.2

Over and above the features provided with QTP 9.0, QTP 9.2 provides the following features:

Mercury Screen Recorder:
Capture your entire run session in a movie clip or capture only the segments with errors, and then view your movie from the Test Results window.
Dynamic Management of Object Repositories:
QuickTest now has a new RepositoriesCollection reserved object that you can use to programmatically manage the set of object repositories that are associated with an action during a run session.

Over and above the features provided with QTP 8.2, QTP 9.0 provides the following features:

Object Repository Manager:
You can use the Object Repository Manager to manage all of the shared object repositories in your organization from one, central location. This includes adding and defining objects, modifying objects and their descriptions, parameterizing test object property values, maintaining and organizing repositories, and importing and exporting repositories in XML format.
You can open multiple object repositories at the same time. Each object repository opens in its own resizable document window. This enables you to compare the content of the repositories, to copy or move objects from one object repository to another, and so forth.
Object Repository Merge Tool:
You can use the Object Repository Merge Tool to merge the objects from two shared object repositories into a single shared object repository. You can also use the Object Repository Merge Tool to merge objects from the local object repository of one or more actions or components into a shared object repository.

When you merge objects from two source object repositories, the content is copied to a new, target object repository, ensuring that the information in the source repositories remains unchanged.

If any conflicts occur during the merge, for example, if two objects have the same name and test object class, but different test object descriptions, the relevant objects are highlighted in the source repositories, and the Resolution Options pane details the conflict and possible resolutions.
Multiple Object Repositories per Action or Component:
QuickTest provides several options for storing and accessing test objects. You can store the test objects for each action or component in its corresponding local object repository, which is unique for each action and component. You can also store test objects in one or more shared object repositories that can be used in multiple actions and components. Alternatively, you can use a combination of objects from the local object repository and one or more shared object repositories. You choose the combination that matches your testing needs.
XML Object Repository Format:
QuickTest now enables you to import and export object repositories from and to XML format. This enables you to modify object repositories using the XML editor of your choice and then import them back into QuickTest. You can import and export files either from and to the file system or a Quality Center project (if QuickTest is connected to Quality Center).
Function Library Editor:
QuickTest now has a built-in function library editor, which enables you to create and edit function libraries containing VBScript functions, subroutines, modules, and so forth, and then call their functions from your test or component.
Handling Missing Actions and Resources:
Whenever a testing document (test, component, or application area) contains a resource that cannot be found, QuickTest opens the Missing Resources pane and lists the missing resource(s). For example, a test may contain an action or a call to an action that cannot be found; a testing document may use a shared object repository that cannot be found; or a testing document may use an object repository parameter that does not have a default value. In all of these cases, QuickTest indicates this in the Missing Resources pane, enabling you to map a missing resource to an existing one, or remove it from the testing document, as required.

QTP Version History

Version history of QuickTest Professional:
1998 -> Astra QT
2000 -> QTP 5.0
2001 -> QTP 5.5
2002 -> QTP 6.0
2003 -> QTP 6.5
2004 -> QTP 8.0
2005 -> QTP 8.2
2006 Feb -> QTP 9.0
2007 Jan -> QTP 9.1
2007 Feb -> QTP 9.2
2008 Jan -> QTP 9.5
2009 Feb -> QTP 10

Wednesday, September 8, 2010

Test Metrics

A test metric is a standard means of measuring some attribute of the software testing process. Metrics are a means of establishing test progress against the test schedule and may be an indicator of expected future results. They are produced in two forms: Base Metrics and Derived Metrics.

Example of Base Metrics:
# Test Cases
# New Test Cases
# Test Cases Executed
# Test Cases Unexecuted
# Test Cases Re-executed
# Passes
# Fails
# Test Cases Under Investigation
# Test Cases Blocked
# 1st Run Fails
# Test Case Execution Time
# Testers

Example of Derived Metrics:
> Test Cases Complete
> Test Cases Passed
> Test Cases Failed
> Test Cases Blocked
> Test Defects Corrected
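To make the base/derived distinction concrete, here is a minimal Python sketch (the field and property names are my own, for illustration) showing how derived metrics fall directly out of the base counts, using the figures from the October post's example:

    from dataclasses import dataclass

    @dataclass
    class BaseMetrics:
        test_cases: int   # total test cases developed
        executed: int     # test cases executed
        passes: int       # test cases passed
        fails: int        # test cases failed

        @property
        def pct_complete(self):
            """Derived: share of developed test cases that have been executed."""
            return self.executed / self.test_cases * 100

        @property
        def pct_passed(self):
            """Derived: share of executed test cases that passed."""
            return self.passes / self.executed * 100

    m = BaseMetrics(test_cases=1360, executed=1280, passes=1065, fails=215)
    print(round(m.pct_complete, 1), round(m.pct_passed, 1))  # 94.1 83.2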

Tuesday, September 7, 2010

40 Important Testing Interview Questions

What’s Ad Hoc Testing ?

A testing where the tester tries to break the software by randomly trying functionality of software.

What’s the Accessibility Testing ?

Testing that determines if software will be usable by people with disabilities.

What’s the Alpha Testing ?

The Alpha Testing is conducted at the developer sites and in a controlled environment by the end user of the software

What’s the Beta Testing ?

Testing the application after the installation at the client place.

What is Component Testing?

Testing of individual software components (unit testing).

What is Compatibility Testing?

In compatibility testing, we verify that the software is compatible with other elements of the system.

What is Concurrency Testing?

Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

What is Conformance Testing?

The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is Context Driven Testing?

The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

What is Data Driven Testing?

Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.

What is Conversion Testing?

Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is Dependency Testing?

Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is Depth Testing?

A test that exercises a feature of a product in full detail.

What is Dynamic Testing?

Testing software through executing it. See also Static Testing.

What is Endurance Testing?

Checks for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End Testing?

Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Exhaustive Testing?

Testing which covers all combinations of input values and preconditions for an element of the software under test.

What is Gorilla Testing?

Heavily testing one particular module or piece of functionality.

What is Installation Testing?

Confirms that the application under test installs, upgrades, and uninstalls correctly across the supported environments and configurations, and that it is operational after installation.

What is Localization Testing?

Testing software that has been adapted for a specific locality, verifying that language, regional settings, and other locale-specific behavior are correct.

What is Loop Testing?

A white box testing technique that exercises program loops.

What is Mutation Testing?

Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

What is Monkey Testing?

Testing a system or application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.

What is Positive Testing?

Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

What is Negative Testing?

Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

What is Path Testing?

Testing in which all paths in the program source code are tested at least once.

What is Performance Testing?

Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "load testing".

What is Ramp Testing?

Continuously raising an input signal until the system breaks down.

What is Recovery Testing?

Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

What is Re-testing?

Testing the functionality of the application again, typically to confirm that a reported defect has been fixed.

What is Regression Testing?

Checking that changes in the code have not affected the existing working functionality.

What is Sanity Testing?

A brief test of the major functional elements of a piece of software to determine if it is basically operational.

What is Scalability Testing?

Performance testing focused on ensuring the application under test gracefully handles increases in workload.

What is Security Testing?

Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is Stress Testing?

Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

What is Smoke Testing?

A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is Soak Testing?

Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What is Usability Testing?

Testing the application for user-friendliness and ease of use.

What is User Acceptance Testing?

User acceptance testing determines whether the software is satisfactory to an end user or customer.

What is Volume Testing?

Testing in which the system is subjected to large volumes of data.

Bug Life Cycle (BLC)

Introduction:

A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application under Test (AUT).

Bug Life Cycle:

In the software development process, a bug has a life cycle. The bug should go through this life cycle to be closed; a specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle.


The different states of a bug can be summarized as follows:

1. New

2. Open

3. Assign

4. Test

5. Verified

6. Deferred

7. Reopened

8. Duplicate

9. Rejected and

10. Closed

Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to "OPEN".

3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN".

4. Test: Once the developer fixes the bug, he has to assign it back to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to "TEST". This indicates that the bug has been fixed and released to the testing team.

5. Deferred: A bug in the deferred state is expected to be fixed in a later release. There are many reasons for moving a bug to this state: the bug's priority may be low, there may be insufficient time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects it. The state of the bug is then changed to "REJECTED".

7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to "DUPLICATE".

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the developer has fixed it, the tester changes the status to "REOPENED". The bug then traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
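A defect tracker typically enforces which state changes are legal. The Python sketch below models the states above as a small state machine; the transition table is my own simplification of the cycle described here, not a specification of any particular tool:

    from enum import Enum

    class BugState(Enum):
        NEW = "new"
        OPEN = "open"
        ASSIGN = "assign"
        TEST = "test"
        VERIFIED = "verified"
        DEFERRED = "deferred"
        REOPENED = "reopened"
        DUPLICATE = "duplicate"
        REJECTED = "rejected"
        CLOSED = "closed"

    # Legal transitions, following the stage descriptions above.
    ALLOWED = {
        BugState.NEW: {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE},
        BugState.OPEN: {BugState.ASSIGN, BugState.DEFERRED},
        BugState.ASSIGN: {BugState.TEST, BugState.DEFERRED},
        BugState.TEST: {BugState.VERIFIED, BugState.REOPENED},
        BugState.VERIFIED: {BugState.CLOSED},
        BugState.REOPENED: {BugState.ASSIGN},
        BugState.DEFERRED: {BugState.OPEN},
    }

    def transition(current, target):
        """Return the new state, or raise if the life cycle forbids the move."""
        if target not in ALLOWED.get(current, set()):
            raise ValueError(f"illegal transition: {current.name} -> {target.name}")
        return target

    state = transition(BugState.NEW, BugState.OPEN)  # ok
    state = transition(state, BugState.ASSIGN)       # ok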
