Testing is an essential part of the life cycle of any computer program.
Why is it Important to Test?
Some people see the cost of testing as an expense that hurts their product’s profitability. It is wiser to think of testing as an investment, one that results in more secure software with fewer bugs and happier end-users. Pitfalls of inadequate testing include:
- Monetary penalties. Lost customers, lawsuits, fines and penalties due to non-compliance with laws and industry regulations all fall into this category.
- Loss of time can be caused by transactions taking a long time to process, but can also include downtime and loss of productivity if people are not able to complete their jobs due to a failure of your software.
- Damage to business reputation occurs when your software does not work properly, and an organization is unable to provide service to their customers as a result. This often results in lost business and, therefore, lost revenue.
At the extreme, the failure of safety-critical systems may result in injury or death.
We understand that many managers see software testing as an expense rather than an investment. The best way to combat this is to save money on your testing while still getting high-quality tests, performed by real humans who have incentives to try their best to “break” your application.
To do this, we use Pay4Bugs, an online software testing marketplace where you set your own prices and see who around the world will work for you at incredibly inexpensive rates!
What are Your Goals?
- Quality Assurance: You know what your program requires as input, and you can usually predict, based on that input, how the program should behave and what its output should be.
- Verification: Confirming that the program matches the specification laid out by the client(s) and project managers.
- Security: There are many ways a program or web site can be attacked. Many of these attacks can be mitigated or avoided entirely; for example, SQL injections in a web application, or buffer overflows in a C or C++ desktop application. Both types of vulnerability are common and can result in disaster if not found and corrected, but both are also very easy to correct (or prevent).
- Usability: Testing isn’t just about making sure your program does not crash. It is also about making sure the program behaves the way users expect it to behave. Your users will want a program that is easy to use. “Easy,” in this context, means intuitive to the user (not to you).
- Traceability: A reasonably complex project may have hundreds or thousands of requirements, and you need some way to know that you have implemented all of them. You also need to track the progress of each requirement and prove that it has been satisfied.
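The SQL-injection risk mentioned above can be made concrete. Here is a minimal sketch using Python’s built-in sqlite3 module; the users table and both query functions are hypothetical, invented purely to contrast string splicing with parameter binding:

```python
import sqlite3

# In-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    query = "SELECT role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Safe: the driver binds the parameter; input is never parsed as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic injection payload returns every row from the unsafe version...
print(find_user_unsafe("' OR '1'='1"))  # leaks all roles
# ...but matches nothing when bound as a plain parameter.
print(find_user_safe("' OR '1'='1"))    # []
```

A security test would feed payloads like this one to every input field and confirm the application behaves like the second function, not the first.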
Test Early, Test Often
Traditionally, testing has occurred at the end of the software development cycle, but that is unwise on all but the smallest projects. Testing should be performed on small, manageable chunks of code, and should be started as early as possible.
White-Box vs. Black-Box Testing
These two terms describe techniques used to test software. White-box testing is performed by people who are familiar with the code and have access to the code and the developers who wrote it. Testers doing white-box testing need to either be familiar with software development, or be given ample time to interview the developers themselves.
Black-box testing is performed by people who never look at the code. The emphasis and purpose of black-box testing is to judge whether all of the requirements and specifications have been met. A combination of both techniques is essential for a well-rounded testing program.
Types of Software Testing
Software testing consists of several subcategories, each of which is done for different purposes, and often using different techniques. Software testing categories include:
- Functionality testing to verify the proper functionality of the software, including validation of system and business requirements, validation of formulas and calculations, as well as testing of user interface functionality.
- Forced error testing, or attempting to break and fix the software during testing so that customers do not break it in production.
- Compatibility testing to ensure that software is compatible with various hardware platforms, operating systems, other software packages, and even previous releases of the same software.
- Performance testing to see how well software performs in terms of the speed of computations and responsiveness to the end-user.
- Scalability testing to ensure that the software will function well as the number of users and size of databases increase.
- Stress testing to see how the system performs under extreme conditions, such as a very large number of simultaneous users.
- Usability testing to evaluate how easily real users can learn and operate the software.
- Application security testing to make sure that valuable and sensitive data cannot be accessed inappropriately or compromised under concerted attack.
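As a rough illustration of the stress testing mentioned above, the following sketch simulates a burst of simultaneous users; handle_request is a stand-in function, not a real application, and a real stress test would drive the actual system instead:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # Placeholder for the system under test: some CPU-bound work.
    return sum(range(n))

# Simulate 500 requests arriving across 50 concurrent "users"
# and confirm that every request completes with a correct result.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, [10_000] * 500))

assert len(results) == 500
assert all(r == sum(range(10_000)) for r in results)
print("500 concurrent requests completed")
```

In practice the interesting output is not pass/fail but where the system degrades: at what load do errors appear, and how gracefully does throughput fall off.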
Manual Software Testing vs. Automated Software Testing
Certain types of testing can be done without human intervention. Other types can’t effectively be done in an automated manner and should use manual testing:
- Usability testing – Ease-of-use is subjective; measuring it with an automated process is difficult at best.
- Exploratory/ad hoc testing (where testers do not follow a ‘script’, but rather ‘explore’ the application and use their instincts to find bugs) – because you are not doing the test exactly the same way each time.
- Testing areas of the application which experience a lot of change.
Automated testing is best used for tests which are explicit and repetitive:
- General QA and functionality tests (i.e. does each module do what the requirements say it should? How does the application respond to incorrect inputs?)
- ‘End to end’ scenario tests (simulating a ‘real world’ use of the software in a production environment)
- Performance, load, and stress testing
The most common automated tests are unit tests. Mathematically-oriented folks might suspect that unit tests refer to unit conversions (like centimeters to inches), and business-oriented people might think of inventory; but the term describes a technique of building small, individual tests for each little piece of functionality. For example, a unit test might be as simple as checking that, when the configuration is saved to a file, it really was saved in the proper format and can be properly reopened by the program. (In fact, this scenario is complex enough that some might break it into multiple unit tests.)
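That save-and-reload scenario might be sketched as a unit test like the following; save_config and load_config are hypothetical stand-ins for a program’s real configuration code:

```python
import json
import tempfile
import unittest

def save_config(config, path):
    # Hypothetical save routine: writes settings as JSON.
    with open(path, "w") as f:
        json.dump(config, f)

def load_config(path):
    # Hypothetical load routine: reads the settings back.
    with open(path) as f:
        return json.load(f)

class ConfigRoundTripTest(unittest.TestCase):
    def test_saved_config_reopens_with_same_values(self):
        config = {"theme": "dark", "autosave": True}
        with tempfile.TemporaryDirectory() as d:
            path = d + "/settings.json"
            save_config(config, path)
            # The file must reopen to exactly the values we saved.
            self.assertEqual(load_config(path), config)

# Run the test case without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ConfigRoundTripTest)
unittest.TextTestRunner().run(suite)
```

A stricter version would split this into separate tests for “the file exists,” “the file is valid JSON,” and “the values round-trip,” which is exactly the subdivision the paragraph above alludes to.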
Acceptance testing is completely different. Acceptance tests may be written by the end user or customer. Even when they are not written by the customer, they are designed by the customer, typically by having a test engineer sit down with the customer and walk them through how to test that the program does everything it is required to do. After working with the customer to create a detailed walk-through, the test engineers can then turn around and design automated tests around these functions, making sure that everything the customer will try to do is possible in the software, and works.
Software Testing Milestones
A program that has hit its Alpha milestone is complete enough that the first round of end-to-end system testing can start. The program is not “ready for prime time,” since, often, the UI is not complete and many bugs may still exist.
By contrast, a program that has hit its Beta milestone has a completed UI; remaining work focuses on continuing to fix bugs, improving performance and enhancing usability.
Typically, end-users start testing the program when it hits beta, although some programmers release alpha versions of their programs, with the caveat that since the programs are alpha, they will contain bugs and may not be usable in a production environment.
Regression Software Testing
This fancy-sounding phrase has a very simple description. Say you’ve already found Bug A, Bug B and Bug C. You’ve fixed them. Now you’re working on Bugs D and E. Regression testing simply ensures that the fixes for D and E don’t cause any of the earlier bugs to reappear.
Regression tests are run as a group; the new tests are executed together with all of the older tests.
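A minimal sketch of the idea: each fixed bug keeps a test that pins its fix in place, and the whole suite runs on every change so old bugs cannot quietly return. The normalize function and its three past “bugs” are invented for illustration:

```python
def normalize(s):
    # Function under test, with the fixes for three past bugs baked in:
    # strip edges, lower-case, and collapse runs of whitespace.
    return " ".join(s.strip().lower().split())

# One pinned test per previously fixed bug.
def test_bug_a_leading_whitespace():
    assert normalize("  hello") == "hello"

def test_bug_b_mixed_case():
    assert normalize("HeLLo") == "hello"

def test_bug_c_internal_runs_of_spaces():
    assert normalize("hello   world") == "hello world"

# Regression run: new tests always execute alongside every older one.
for test in (test_bug_a_leading_whitespace,
             test_bug_b_mixed_case,
             test_bug_c_internal_runs_of_spaces):
    test()
print("regression suite passed")
```

If a later fix for Bug D reintroduced, say, the whitespace problem, test_bug_a would fail immediately, which is precisely what regression testing is for.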
Acceptance Testing
Acceptance testing is all about giving the project to the client so the client’s users can test it.
In theory, acceptance testing should also be quick and relatively painless; plenty of testing should already have been done. Earlier tests should have eliminated bugs and usability issues, making acceptance testing a simple formality.
Acceptance testing encompasses other artefacts too, not just the software. This phase of software development may include updating manuals and documentation; updating processes and operational procedures; training end-users and measuring operational performance against Service-Level Agreements.
Goal Testing
Goal testing ensures that code executes within the time limits specified by any Service-Level Agreement (SLA). Goal testing is performed in both single-user and multiple-user environments. When committing to an SLA, the management team should commit only to those sections of the system they control.
A good SLA includes lots of detail: “Transaction X will complete within Y milliseconds with Z number of simultaneous users.” A bad SLA includes little detail: “Screen X will complete within Y seconds.” The user seeing screen X could be sitting next to the server, on the same LAN, in which case the SLA will probably be met; or the user could be sitting out in the middle of nowhere on a dialup connection, in which case the SLA is not likely to be met.
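A goal test along these lines might look like the following sketch, where transaction_x and the 500 ms budget are hypothetical stand-ins for a real transaction and its SLA:

```python
import time

def transaction_x():
    # Stand-in for the transaction covered by the SLA;
    # a real goal test would exercise the actual system.
    time.sleep(0.01)

# Hypothetical SLA: transaction X completes within 500 ms.
SLA_MS = 500

start = time.perf_counter()
transaction_x()
elapsed_ms = (time.perf_counter() - start) * 1000

assert elapsed_ms < SLA_MS, f"SLA missed: {elapsed_ms:.1f} ms"
print(f"transaction completed in {elapsed_ms:.1f} ms (SLA {SLA_MS} ms)")
```

A multi-user version would run transaction_x from Z concurrent workers, matching the “with Z number of simultaneous users” clause a good SLA spells out.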