Useful Software Test Metrics

I) Introduction

"When we can measure what we are speaking about and express it in numbers, we know something about it; but when we cannot measure, when we cannot express it in numbers, our knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but we have scarcely, in our thoughts, advanced to the stage of science." (Lord Kelvin)

Why do we need metrics?

"We cannot improve what we cannot measure."
"We cannot control what we cannot measure."

Test metrics help with exactly this: they make the testing effort measurable, and therefore controllable and improvable.

II) Types of Metrics

Base Metrics (Direct Measures)
Base metrics constitute the raw data gathered by a test analyst throughout the testing effort. These metrics are used to provide project status reports to the test lead and project manager; they also feed into the formulas used to derive calculated metrics.
Examples: number of test cases, number of test cases executed.

Calculated Metrics (Indirect Measures)
Calculated metrics convert the base metrics data into more useful information. These metrics are generally the responsibility of the test lead and can be tracked at many different levels (by module, tester, or project). Calculated metrics are created during the test reporting or post-test analysis phase.
Examples: % complete, % test coverage.

III) Crucial Web-Based Testing Metrics

Test Plan Coverage on Functionality
Total number of requirements versus number of requirements covered through test scripts. Requirements should be defined clearly, at the project level, at the time of effort estimation.
Example: The total number of requirements estimated is 46; 39 were tested and 7 were blocked.

Coverage = (39 / 46) x 100 = 84.8%

Test Case Defect Density
Total number of errors found through test scripts versus the number of test scripts developed and executed.
Example: 1360 test scripts were developed, 1280 were executed, 1065 passed, and 215 failed.

Test case defect density = (215 x 100) / 1280 = 16.8%

This 16.8% can also be called the test case efficiency %, since it depends on the total number of test cases that uncovered defects.

Defect Slippage Ratio
Number of defects that slipped through (reported from production) versus number of defects reported during test execution.
Example: Customer-filed defects are 21, total defects found while testing are 267, and invalid defects are 17.

Slippage ratio = [21 / (267 - 17)] x 100 = 8.4%
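To make the arithmetic concrete, here is a minimal Python sketch of the three formulas above, using the example figures from this section. The function names are illustrative, not from the original article or any testing library:

```python
# Minimal sketch of the metrics above; function names are illustrative.

def test_plan_coverage(requirements_tested, requirements_total):
    """Test plan coverage on functionality, as a percentage."""
    return requirements_tested / requirements_total * 100

def test_case_defect_density(scripts_failed, scripts_executed):
    """Percentage of executed test scripts that uncovered defects."""
    return scripts_failed / scripts_executed * 100

def defect_slippage_ratio(production_defects, test_defects, invalid_defects):
    """Production defects versus valid defects found during test execution."""
    return production_defects / (test_defects - invalid_defects) * 100

# Example figures from the article:
print(round(test_plan_coverage(39, 46), 1))           # 84.8
print(round(test_case_defect_density(215, 1280), 1))  # 16.8
print(round(defect_slippage_ratio(21, 267, 17), 1))   # 8.4
```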
Requirement Volatility
Number of requirements agreed versus number of requirements changed.
Example: The VSS 1.3 release initially had 67 requirements in total; later 7 new requirements were added, 3 were removed from the initial set, and 11 were modified.

Requirement volatility = [(7 + 3 + 11) / 67] x 100 = 31.34%

This means almost one third of the requirements changed after their initial identification.

Review Efficiency
Review efficiency is a metric that offers insight into the quality of reviews and testing. Some organizations also refer to this as "static testing" efficiency and aim to find a minimum of 30% of defects through static testing.

Review efficiency = (total number of defects found by reviews / total number of project defects) x 100

Example: A project found a total of 269 defects in its various reviews, which were fixed; the test team then reported 476 valid defects.

Review efficiency = [269 / (269 + 476)] x 100 = 36.1%

Efficiency and Effectiveness of Processes: Metrics for Software Testing

• Defect Removal Effectiveness (DRE)

DRE = (defects removed during the development phase x 100%) / defects latent in the product

where defects latent in the product = defects removed during the development phase + defects found later by the user

• Efficiency of the Testing Process (define size in KLOC, function points, or requirements)

Testing efficiency = size of software tested / resources used
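The remaining formulas can be sketched the same way. Again, this is a minimal illustrative sketch; the function names are mine, not the article's:

```python
# Minimal sketch of the remaining formulas; function names are illustrative.

def requirement_volatility(added, removed, modified, initial):
    """Changed requirements as a percentage of the initially agreed set."""
    return (added + removed + modified) / initial * 100

def review_efficiency(review_defects, valid_test_defects):
    """Defects found by reviews versus all project defects."""
    return review_defects / (review_defects + valid_test_defects) * 100

def defect_removal_effectiveness(removed_in_development, found_later_by_user):
    """DRE: latent defects = defects removed in development + defects found by the user."""
    latent = removed_in_development + found_later_by_user
    return removed_in_development / latent * 100

def testing_efficiency(size_tested, resources_used):
    """Size tested (KLOC, function points, or requirements) per unit of resources used."""
    return size_tested / resources_used

# Example figures from the article:
print(round(requirement_volatility(7, 3, 11, 67), 2))  # 31.34
print(round(review_efficiency(269, 476), 1))           # 36.1
```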