Wednesday, January 11, 2012

How to test a software requirements specification (SRS)?

Did you know that most software bugs are caused by incomplete or inaccurate functional requirements? No matter how well the code is written, it cannot behave correctly if the requirements are ambiguous.
It's better to catch requirement ambiguities and fix them early in the development life cycle; the cost of fixing a bug after development is complete, or after the product is released, is far higher. So it's important to analyze the requirements and catch incorrect ones before the design and implementation phases of the SDLC.
How do you measure a functional software requirement specification (SRS) document?
We need to define some standard tests against which to measure the requirements. Once each requirement has passed these tests, you can evaluate and freeze the functional requirements.
Let's take an example. You are working on a web-based application, and the requirement is as follows:
“Web application should be able to serve the user queries as early as possible”
How will you freeze the requirement in this case?
What will be your requirement satisfaction criteria? To get the answer, ask the stakeholders this question: how much response time is acceptable to you?
If they say they will accept any response within 2 seconds, then that is your requirement measure. Freeze this requirement and follow the same procedure for the next one.
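Once a requirement is measurable like this, it can also be checked automatically. Here is a minimal sketch of such a check in Python, using the requests library; the query URL is hypothetical, and the 2-second threshold is the figure agreed with the stakeholders above:

```python
import time
import requests

RESPONSE_TIME_LIMIT = 2.0  # seconds, as agreed with the stakeholders

def test_query_response_time():
    """Check the frozen requirement: a user query is answered within 2 seconds."""
    start = time.monotonic()
    # Hypothetical endpoint; substitute your application's real query URL.
    response = requests.get("http://example.com/search", params={"q": "test"})
    elapsed = time.monotonic() - start

    assert response.status_code == 200
    assert elapsed <= RESPONSE_TIME_LIMIT, (
        f"Query took {elapsed:.2f}s, limit is {RESPONSE_TIME_LIMIT}s")
```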
We just learned how to measure requirements and carry the frozen versions into the design, implementation, and testing phases.
Now let's take another example. I was working on a web-based project. The client (stakeholders) specified the requirements for the initial phase of development, and my manager circulated them to the team for review. When we started discussing these requirements, we were shocked! Everyone had his or her own conception of what they meant. We found many ambiguities in the terms used in the requirement documents, which we later sent back to the client for review and clarification.
The client had used many ambiguous terms with several possible meanings, making it difficult to work out the intended one. The next version of the requirement document from the client was clear enough to freeze for the design phase.
From this example we learned that requirements should be clear and consistent.
The next criterion for testing a requirements specification is to discover missing requirements.
Project designers often don't get a clear idea about specific modules, so they simply assume some requirements during the design phase. No requirement should be based on assumptions; requirements should be complete, covering every aspect of the system under development.
The specification should state both types of requirements, i.e., what the system should do and what it should not.
Generally I use my own method to uncover unspecified requirements. When I read the software requirements specification (SRS) document, I note down my own understanding of the requirements that are specified, plus any other requirements the document ought to cover. This helps me ask questions about the unspecified requirements and make them clearer.
To check requirement completeness, divide the requirements into three sections: 'must implement' requirements, requirements that are not specified but are 'assumed', and a third, 'imagined' type. Check that all three types are addressed before the software design phase.
Also check whether the requirements are related to the project goal.
Sometimes stakeholders have their own areas of expertise that they expect to see reflected in the system under development, without considering whether those requirements are relevant to the project at hand. Make sure to identify such requirements, and try to keep irrelevant ones out of the first phase of the development cycle. If that isn't possible, ask the stakeholders: why do you want to implement this specific requirement? The answer will flesh out the requirement, making it easier to design the system with future scope in mind.
But how do you decide whether a requirement is relevant?
Simple answer: set the project goal and ask this question: will leaving this requirement out cause any problem in achieving our specified goal? If not, it is an irrelevant requirement; ask the stakeholders whether they really want to implement it.
In short, the requirements specification (SRS) document should address the following:
Project functionality (what should be done and what should not)
Software and hardware interfaces, and the user interface
System correctness, security, and performance criteria
Implementation issues (risks), if any

Smoke testing and sanity testing – Quick and simple differences

Despite hundreds of web articles on smoke and sanity testing, many people are still confused by these terms and keep asking me about them. Here is a simple, understandable breakdown that should clear up the confusion between smoke testing and sanity testing.
SMOKE TESTING:
  • Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow-and-wide approach in which all areas of the application are tested without going too deep.
  • A smoke test is scripted, using either a written set of tests or an automated test.
  • A smoke test is designed to touch every part of the application in a cursory way. It's shallow and wide.
  • Smoke testing is conducted to check whether the most crucial functions of a program work, without bothering with finer details (as in build verification).
  • Smoke testing is a routine health check-up on a build of an application before taking it into in-depth testing. A small sketch of such a suite follows this list.
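As a rough illustration, here is what a shallow-and-wide smoke suite might look like for a web application, sketched in Python with pytest and requests. The base URL and page paths are hypothetical; the point is that each major area gets exactly one cursory check:

```python
import pytest
import requests

BASE_URL = "http://example.com"  # hypothetical application under test

# One cursory check per major area of the application: shallow and wide.
@pytest.mark.parametrize(
    "path", ["/", "/login", "/search", "/cart", "/checkout", "/profile"])
def test_page_loads(path):
    """Smoke test: every major page should at least load without an error."""
    response = requests.get(BASE_URL + path)
    assert response.status_code == 200
```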
SANITY TESTING:
  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
  • A sanity test is usually unscripted.
  • A sanity test is used to determine that a small section of the application still works after a minor change.
  • Sanity testing is cursory testing, performed whenever a quick check is sufficient to show that the application is functioning according to specification. This level of testing is a subset of regression testing.
  • Sanity testing verifies whether particular requirements are met, checking the selected features in depth rather than all features breadth-first. A sketch follows this list.
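By contrast, a sanity suite after a minor change digs into just one area. A sketch in the same style, again with hypothetical endpoints: if only the login code was touched, only login behaviour is exercised, but in more depth:

```python
import requests

BASE_URL = "http://example.com"  # hypothetical application under test

def test_login_valid_credentials():
    """Sanity test after a login fix: valid credentials must still work."""
    response = requests.post(BASE_URL + "/login",
                             data={"user": "alice", "password": "correct"})
    assert response.status_code == 200

def test_login_invalid_credentials():
    """...and invalid credentials must still be rejected."""
    response = requests.post(BASE_URL + "/login",
                             data={"user": "alice", "password": "wrong"})
    assert response.status_code == 401
```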
I hope these points help you clearly understand smoke and sanity tests and remove any confusion.

10 Tips to Help You Achieve Your Software Testing Documentation Goal

It’s not that we don’t know how to do the documentation right. We just don’t think it’s important.
Everyone has standard templates for all kinds of documentation, from the test strategy, test plan, test cases, and test data to the bug report. These exist to satisfy standards (CMMI, ISO, etc.), but when it comes to actual implementation, how many of these documents do we really use? We just need to synchronize our quality process with the documentation standards and other processes in the organization.

The simplest way to keep all documentation on track is to involve, from the kick-off phase, a person who understands the project dynamics, domain, objective, and technology. And who better than a QA person for this? (Of course, technical writers exist to do this, but consider the common scenario of small companies where no technical writer is present.)
To achieve this goal of testing and documentation, I feel we need to focus on a few points.
Here are the top 10 tips to help you achieve your software testing documentation goal:
1. Involve QA from the very first phase of the project so that QA and documentation work hand in hand.
2. The process defined by QA should be followed by the technical people; this helps remove most defects at a very early stage.
3. Merely creating and maintaining software testing templates is not enough; make sure people actually use them.
4. Don't just create a document and leave it; update it as and when required.
5. Requirement changes are an important part of any project; don't forget to document them as well.
6. Use version control for everything. This will help you manage and track your documents easily.
7. Make the defect remediation process easier by documenting all defects. When documenting a defect, include a clear description, the steps to reproduce it, the affected area, and details about the author.
8. Document what you need in order to understand your own work, and what you will need to produce for your stakeholders when required.
9. Use a standard template for documentation, such as an Excel sheet or doc file template, and stick to it for all your documentation needs.
10. Keep all project-related documents in a single location, accessible to every team member for reference as well as for updates whenever required.

Why Documentation is Important in Software Testing

Software Testing Documentation: What’s that?
We all read various articles on testing technologies and methods, but how many of us have seen articles on documentation? No doubt there are a few. Is it that documents are not important? No; it's because we have not yet realized the importance of documents.
The fact is, projects with complete documentation show a high level of maturity. Yet most companies give documentation nowhere near the importance they give to the software development process. Searching the web turns up plenty of templates for creating various types of documents, but how many of them are actually used by organizations or individuals?
The fact is that careful documentation can save an organization time, effort, and money. Why is so much stress laid on documentation when going for any kind of certification? Because it demonstrates the importance of the client and of processes to the individual and the organization. Unless you can produce documentation that the user is comfortable with, no one is going to accept your product, no matter how good it is.
From my own experience: we own a product with somewhat confusing functionality. When I started working on it, I asked my manager for some help documents and got the answer, "No, we don't have any documents." I made an issue of that, because as a QA I knew that no one could understand how to use the product without documents or training. And if the user is not satisfied, how are we going to make money from that product?
“Lack of documentation is becoming a problem for acceptance” – Wietse Venema
The same thing applies to user manuals. Take Microsoft as an example: they launch every product with proper documentation. Even for Office 2007 we have documents that are thorough and easy for any user to understand. That's one of the reasons all their products are successful.
In small companies we often hear that "the project was rejected in the proposal or kick-off phase", simply because the proposal documentation lacked concise, expressive language and failed to show the capability of the organization. It's not that small companies can't deliver good-quality projects; it's their inability to express their capability. (I too work in a small organization of 80 employees, and I have heard this many times.)
I personally feel that Quality is the one department that can make this possible. We are the department that can argue this case and help provide a successful future for our organizations.
Let's organize this discussion into a few points from the quality perspective:
- Clarify quality objectives and methods
- Ensure clarity about tasks and consistency of performance
- Ensure internal co-ordination in client work
- Provide feedback for preventive actions
- Provide feedback for your planning cycle
- Create objective evidence of your quality management system’s performance
Hundreds of documents are used in the software development and testing life cycle. Here are a few important software testing documents that we need to use and maintain regularly:
1) Test plan
2) Test design and test case specification
3) Test strategy
4) Test summary reports
5) Weekly status report
6) User documents/manuals
7) User acceptance report
8) Risk assessment
9) Test log
10) Bug reports
11) Test data
12) Test analysis
Software testers also regularly need to refer to the following documents:
1) Software requirement specifications
2) Functional documents
Summary: 
Software testing documents always play an important role in the project development and testing phases, so keep things documented whenever possible. Don't rely on verbal communication; always stay on the safe side. Documentation will not only protect you but also help the organization in the long run, saving thousands of dollars on training and, more importantly, on fixing issues caused by missing development and testing documents. Don't document merely to avoid finger-pointing at you; the habit of documentation will bring a systematic approach to your testing process and leave ad hoc testing behind.

Thursday, January 5, 2012

How to find more or better bugs (12 tips to explode your bug count and/or severity)?

Well, we know that we never find all the bugs in the application under test (assuming the application at hand is not trivial). However, we do want to discover and report the most, and the best, bugs that we can. If you want to find more or better bugs than you do at present, you need fresh ideas. Here are some tips.

Tip 1. Review the application's requirements often. You may notice that no test cases, or only partial ones, exist for certain requirements. You may find bugs when you test for these requirements. Keep abreast of change requests and changes to requirements. Be quick, and you may find bugs immediately after a requirement change has first been implemented.

Tip 2. It is possible that you have positive test cases only. If so, create negative test cases for each requirement and execute them; a small sketch follows.
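For instance, if a requirement says an age field accepts values from 18 to 99, the positive case checks a valid age, and each negative case feeds the field something the requirement forbids. A minimal sketch in Python with pytest; validate_age is a hypothetical stand-in for your application's real validation logic:

```python
import pytest

def validate_age(age):
    """Hypothetical stand-in for the application's validation logic."""
    return isinstance(age, int) and 18 <= age <= 99

def test_valid_age_accepted():  # positive test case
    assert validate_age(30)

@pytest.mark.parametrize("bad_age", [17, 100, -5, "thirty", None])
def test_invalid_age_rejected(bad_age):  # negative test cases
    assert not validate_age(bad_age)
```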

Tip 3. Execute each test case with a variety of interesting test data (generated, for example, by the boundary-value analysis or pairwise testing techniques).
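A sketch of boundary-value analysis: for a field that accepts 18 to 99, the interesting data points cluster at the edges of the range. A small hypothetical helper that generates them:

```python
def boundary_values(low, high):
    """Boundary-value analysis: values at, just inside,
    and just outside each limit of a valid range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# For an age field accepting 18..99 this yields [17, 18, 19, 98, 99, 100];
# run each relevant test case once per value.
print(boundary_values(18, 99))
```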

Tip 4. Test the interface of your application with external systems carefully. This is where a number of bugs may exist.

Tip 5. Another place to look for bugs is the application settings, since this is one area that may rarely be tested, for fear that the application may stop working correctly or stop working altogether.

Tip 6. Repeat your tests using the other supported client configurations (different CPUs, RAM sizes, operating systems, screen resolutions, browsers, etc.).
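Parametrized tests make this repetition systematic. A sketch in Python with pytest; the configuration matrix and the render_page helper are hypothetical, standing in for whatever actually drives your environments (Selenium, Playwright, a device farm, and so on):

```python
import pytest

# Example configuration matrix; substitute the platforms your application supports.
BROWSERS = ["chrome", "firefox", "edge"]
RESOLUTIONS = [(1920, 1080), (1366, 768), (1024, 768)]

def render_page(browser, width, height):
    """Hypothetical stand-in for driving the real application."""
    return {"browser": browser, "width": width, "height": height, "ok": True}

@pytest.mark.parametrize("browser", BROWSERS)
@pytest.mark.parametrize("width,height", RESOLUTIONS)
def test_page_renders_on_all_configurations(browser, width, height):
    # pytest expands this into one test per (browser, resolution) combination.
    result = render_page(browser, width, height)
    assert result["ok"]
```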

Tip 7. Look at the previous bug reports against the same application or similar applications. See if you can test your application using the ideas or information contained in the previous bug reports.

Tip 8. Do not ignore the cosmetic bugs. If they would inconvenience a user, they should be reported and fixed.

Tip 9. Create a list of items that you would like to test if you had the time. When you test your application, you may find yourself taking mental notes of things that you would like to test later. Jot these things down in your list. Come back to the application another time and test the things in your list against all areas of the application.

Tip 10. In case you are working as part of a testing/ QA team, do not restrict yourself to the areas that you are supposed to test. You may find bugs in the other areas. This would also have the nice side-effect of increasing your overall knowledge of the application.

Tip 11. Instead of rushing through your tests at top speed, slow down (I know it is difficult). You will give yourself time to think more clearly, and you will start to observe things about your application that you did not observe before.

Tip 12. Finally, take pride in the bugs reported by you. Feel free to mention an interesting bug that you found to a team member. 

A word of warning: do not get carried away if you suddenly start finding a lot more (or better) bugs. You should still take the time to contemplate each bug, reproduce it, match it to the requirement, and carefully create a good bug report.

Enjoy bug-hunting!

Testing vs. Checking

There is confusion in the software development business over the distinction between testing and checking. I will now attempt to make the distinction clearer.

Checking Is Confirmation

Checking is something that we do with the motivation of confirming existing beliefs. Checking is a process of confirmation, verification, and validation. When we already believe something to be true, we verify our belief by checking. We check when we've made a change to the code and we want to make sure that everything that worked before still works. When we have an assumption that's important, we check to make sure the assumption holds. Excellent programmers do a lot of checking as they write and modify their code, creating automated routines that they run frequently to make sure that the code hasn't broken. Checking is focused on making sure that the program doesn't fail.
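A concrete example of a check: an automated routine that confirms an existing belief every time it runs. A minimal sketch in Python (the apply_discount function is invented for illustration):

```python
def apply_discount(price, percent):
    """Production code that may be modified over time."""
    return round(price * (1 - percent / 100), 2)

def test_discount_unchanged():
    """A check: we already believe these results are correct; running this
    after every code change confirms that the belief still holds."""
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(200.0, 25) == 150.0
    assert apply_discount(100.0, 0) == 100.0
```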

Testing Is Exploration and Learning

Testing is something that we do with the motivation of finding new information. Testing is a process of exploration, discovery, investigation, and learning. When we configure, operate, and observe a product with the intention of evaluating it, or with the intention of recognizing a problem that we hadn't anticipated, we're testing. We're testing when we're trying to find out about the extents and limitations of the product and its design, and when we're largely driven by questions that haven't been answered or even asked before. Testing is focused on "learning sufficiently everything that matters about how the program works and about how it might not work."

Top reasons given by testers when bugs are reported by clients

Here is a list of reasons given by testers when they do not report or even find (important) bugs.

I found this bug BUT...
1. It was not approved for submission (by the test lead/ test manager/ fellow testers/ programmer).
2. The bug report was rejected. (Never mind the reason for rejection!)
3. This bug is reported but as part of another bug report which is still open.
4. I did not report it because it is intermittent in nature.
5. I reported it verbally due to lack of time.
6. So many bugs are still open. It would not have made sense to report yet another bug.

I did not find this bug BECAUSE...

7. I was not informed that this functionality is complete (and to be tested).
8. This bug is only visible with negative testing and all our test cases are positive test cases.
9. There is no existing test case to find this bug.
10. The test case to find this bug is not in our test plan.
11. This bug can be found only by the client's test cases which we do not have.
12. This functionality was blocked during my test.
13. I have tested this module briefly (I was just assigned this module OR this module was re-assigned to another tester quite early).
14. I have been busy re-testing the numerous bug fixes.
15. They stopped the testing before I had the time to test this.
16. It worked fine the last time I tested it. They must have changed the application after that.
17. It worked fine with the test data that I used.
18. This bug is related to the latest changes in the requirements, about which I was not informed.
19. This bug is specific to the client's environment.
20. If you examine it carefully, it is not really a bug.

Don't worry; we have all used these excuses at one time or another. By the way, did you notice the similarity to the top replies given by programmers when their applications do not work correctly?
When you lead a team of testers, you should watch out for these remarks and make your team culture and test process robust enough to prevent these problems from occurring.
