Some years ago, testing was not a specialized practice as it is in other industries. Professionals did only a small amount of testing, in an informal way. In later years, organizations understood the importance of testing and started treating it as a prime part of making their product, giving enough time for testing and following procedural methods with documentation.
Here I am going to explain the entire "Software Testing Life Cycle" process in brief.
Software Testing Life Cycle - Presentation Transcript
1.Software Testing Life Cycle Designed and compiled by: balaji
2.STLC - Definition
The course of testing software in a well-planned way is known as the Software Test Life Cycle.
Contract Signing, Requirement Analysis, Test Planning, Test Development, Test Execution, Defect Reporting, Retest Defects, and Product Delivery are the various stages of the STLC.
***********************3.STLC – Stages Involved************************************
3.1.Contract Signing:
Process: The project contract is signed with the client for testing the software.
Documents involved:
SRS(software requirement specifications)
Test Deliverables
Test Metrics etc.
3.2.Requirement Analysis:
Process: The software is analyzed for design and implementation methods, and its testable aspects are recorded.
Documents involved:
Requirement Specification documents
Functional Specification documents
Design Specification documents (use cases, etc)
Use case Documents
Test Traceability Matrix for identifying Test Coverage
3.3.Test Planning:
Process: To plan how the testing process should flow.
Test Process Flow
Test Scope, Test Environment
Different Test phase and Test Methodologies
Manual and Automation Testing
Defect Management, Configuration Management, Risk Management, etc.
Evaluation and identification of test and defect tracking tools
Documents Involved:
Master Test Plan, Test Scenario, SCM
3.4.Test Development:
Process:
Test Traceability Matrix and Test coverage
Test Scenarios Identification & Test Case preparation
Test data and Test scripts preparation
Test case reviews and Approval
Baselining under Configuration Management
Documents Involved:
Test Plan, RTM
Test cases
3.5.Test Execution:
Process:
Executing Test cases
Executing Test Scripts
Capture, review and analyze Test Results
Raising defects and tracking them to closure
Documents Involved:
Test Cases
Test Execution report
Bug report
Requirement traceability matrix
3.6.Defect Reporting :
Process:
Defect logging
Assigning defect and fixing
Retesting
Defect closing
Documents involved:
Test report
Bug Report
3.7.Product Delivery :
Process:
After the product has undergone several levels of testing, acceptance testing (UAT) is performed by the user/client, wherein the use cases are executed and the product is accepted to go live.
Test Metrics and process Improvements made
Build release
Receiving acceptance
Documents involved :
Test summary reports
UAT Test Plan, UAT Test cases
*********************************************************************************
*******************************4.TEST PLAN*******************************
4.1.Test Plan – What?
Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec
Details the project-specific Test Approach
Lists general (high-level) Test Case areas
Includes a testing Risk Assessment
Includes a preliminary Test Schedule
Lists Resource requirements
4.2.Test Plan – Why?
Identify Risks and Assumptions up front to reduce surprises later. Communicate objectives to all team members. Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.
Failing to plan = planning to fail.
4.3.Test Plan – Definition
The test strategy identifies the multiple test levels which are going to be performed for the project. Activities at each level must be planned well in advance and formally documented. Based on the individual plans, the individual test levels are carried out.
4.4.Test Plan – Consists of…
Unit Testing Tools
Required tool to test at unit level
Priority of Program units
Module-wise priority
Naming convention for test cases
Status reporting mechanism
Regression test approach
ETVX Criteria
Entry means the entry point to that phase.
For example, for unit testing, coding must be complete; only then can unit testing start.
Task is the activity that is performed.
Validation is the way in which the progress, correctness, and compliance are verified for that phase.
Exit states the completion criteria of that phase, after validation is done.
For example, the exit criterion for unit testing is that all unit test cases must pass.
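The ETVX idea can be sketched as a simple phase gate. This is an illustrative sketch, not part of the original material; the criteria shown (coding complete, all unit test cases pass) come from the examples above.

```python
# A minimal sketch of an ETVX phase gate (illustrative only).
def run_phase(name, entry_ok, task, validate):
    """Run a phase only if its entry criterion holds; exit only if validation passes."""
    if not entry_ok():
        return "blocked"      # Entry: prerequisites not met, phase cannot start
    result = task()           # Task: the activity performed in this phase
    if validate(result):      # Validation: progress/correctness/compliance checked
        return "exit"         # Exit: completion criteria satisfied
    return "rework"

# Example: unit testing starts only when coding is complete,
# and exits only when all unit test cases pass.
coding_complete = True
unit_results = [True, True, True]   # pass/fail of each unit test case

status = run_phase(
    "unit testing",
    entry_ok=lambda: coding_complete,
    task=lambda: unit_results,
    validate=lambda results: all(results),
)
print(status)  # exit
```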
*********************************************************************************
5.Risk Analysis ::
A risk is the potential for loss or damage to an organization from materialized threats. Risk Analysis attempts to identify all the risks and then quantify their severity.
Risk Identification
>>Software Risks
>>Business Risks
>>Testing Risks
>>Premature Release Risk
>>Risk Methods
5.1.Software Risks
Knowledge of the most common risks associated with Software development, and the platform you are working on.
5.2.Business Risks
Most common risks associated with the business using the Software.
5.3.Testing Risks
Knowledge of the most common risks associated with Software Testing for the platform you are working on, tools being used, and test methods being applied
5.4.Premature Release Risk
Ability to determine the risk associated with releasing unsatisfactory or untested Software Products
5.5.Risk Methods
Strategies and approaches for identifying risks or problems associated with implementing and operating information technology products and processes; assessing their likelihood; and initiating strategies to test those risks.
*********************************6.Test Execution ***********************************
6.1.Software Testing Fundamentals
>Testing is a process of executing a program with the intent of finding an error
>A good test case is one that has a high probability of finding an as yet undiscovered error
>A successful test is one that uncovers an as yet undiscovered error
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications
When Testing should start?
Testing early in the life cycle reduces errors. Test deliverables are associated with every phase of development. The goal of the software tester is to find bugs, find them as early as possible, and make sure they are fixed. The number one cause of software bugs is the Specification.
The next largest source of bugs is the Design.
When to Stop Testing?
Some reasons to stop testing are:
Deadlines (release deadlines, testing deadlines.)
Test cases completed with certain percentages passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
The rate at which Bugs can be found is too small
Beta or Alpha Testing period ends
The risk in the project is under acceptable limit
This can be difficult to determine.
Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done.
6.2.Test Execution ::
Testing of an application includes:
Unit Testing
Integration testing
System Testing
Acceptance testing
These are the functional testing levels; a few other functional, non-functional, performance, and other testing methods can also be applied to the software.
6.2.1.Test Execution – Unit testing
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers.
The basic input/output of the units, along with their basic functionality, will be tested.
Input units will be tested for format, alignment, accuracy, and totals.
The UTP will clearly give the rules of what data types are present in the system, their format, and their boundary conditions.
Testing of the screens, files, database, etc. is to be done in proper sequence.
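As a sketch of unit-testing an input for format and boundary conditions, here is a small Python example; the function `parse_amount` and its rules (digits only, valid range 0..9999) are hypothetical, invented for illustration:

```python
import unittest

def parse_amount(text):
    """Hypothetical unit under test: parse an amount, digits only, range 0..9999."""
    if not text.isdigit():
        raise ValueError("format")          # wrong format
    value = int(text)
    if not 0 <= value <= 9999:
        raise ValueError("boundary")        # outside the valid range
    return value

class TestParseAmount(unittest.TestCase):
    def test_format(self):
        with self.assertRaises(ValueError):
            parse_amount("12a")             # non-digit input rejected

    def test_boundaries(self):
        self.assertEqual(parse_amount("0"), 0)        # lower boundary
        self.assertEqual(parse_amount("9999"), 9999)  # upper boundary
        with self.assertRaises(ValueError):
            parse_amount("10000")           # just above the boundary

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParseAmount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```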
6.2.2.Test Execution – Integration testing
The integration test plan is the overall plan for carrying out the activities at the integration test level.
This section clearly specifies which kinds of interfaces fall under the scope of testing: internal and external interfaces, with their requests and responses explained.
The two approaches practiced are Top-Down and Bottom-Up integration.
Done correctly, the testing activities slowly build up the product, unit by unit, and then integrate the units.
6.2.3.Test Execution – System testing
The system test plan is the overall plan for carrying out the system test level activities.
System testing is based on the requirements
All requirements are to be verified in the scope of system testing
The requirements can be grouped in terms of the functionality
Based on this, there may be priorities also among the functional groups
Apart from this, any special testing performed is also stated here.
6.2.4.Test Execution – Non-functional testing
Non-functional testing includes:
Installation testing – installation environment, practical obstacles, etc.
Compatibility testing – compatibility with other system software.
Configuration testing – how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
Security testing – keeping the system secure from mistaken/accidental users, hackers, and other malevolent attackers.
Recovery testing – how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Usability testing – testing for user-friendliness.
6.2.5.Test Execution – Performance testing
Performance testing includes:
>>Load Testing – Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation
>>Stress testing - Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity
*********************************************************************************
7.Bug/Defect Management
7.1.BUG LIFE CYCLE - What is a bug?
A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from behaving as intended.
What is a bug life cycle?
In software testing, the term life cycle refers to the various stages that a defect/bug assumes over its life.
>>BUG LIFE CYCLE
The different stages involved in a bug life cycle are as follows:
1.Finding Bugs
2.Reporting/ Documentation
3.Fixing
4.Retesting
5.Closing
Stages involved in Bug Life Cycle
1.Finding Bugs:
Software Tester finds bug while testing.
It is then logged and assigned to a programmer to be fixed.
2.Reporting/ Documentation:
In software, bugs need to be tracked and managed in order to:
Communicate the bug for reproducibility, resolution, and regression.
Track the bug's status (open, resolved, closed).
Ensure the bug is not forgotten, lost, or ignored.
3.Fixing:
Once the bug is assigned to the developer, he fixes the bug.
Once the programmer fixes the code, he assigns it back to the tester and the bug enters the resolved state.
4.Retesting:
The tester then performs a regression test to confirm that the bug is indeed fixed .
5.Closing:
If the bug is fixed, then the tester closes the bug.
Here the bug then enters its final state, the closed state.
7.2.Different status of a Bug
New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
Open: After a tester has posted a bug, the test lead approves that the bug is genuine and changes the state to “OPEN”.
Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.
Test: Once the developer fixes the bug, he assigns it to the testing team for retesting. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. It specifies that the bug has been fixed and released to the testing team.
Deferred: A bug changed to the deferred state is expected to be fixed in a future release.
Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.
Duplicate: If the bug is reported twice, or two bugs describe the same issue, one bug's status is changed to “DUPLICATE”.
Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.
Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.
Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
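The statuses above can be summarized as an allowed-transition map. This is an illustrative sketch; exact transitions vary by tool, and the map below simply mirrors the flow described in the text:

```python
# Bug life cycle as an allowed-transition map (illustrative; tool-specific in practice).
TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":      {"ASSIGN"},
    "ASSIGN":    {"TEST", "DEFERRED", "REJECTED"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},       # traverses the life cycle once again
    "DEFERRED":  {"ASSIGN"},       # picked up in a later release
    "REJECTED":  set(),
    "DUPLICATE": set(),
    "CLOSED":    {"REOPENED"},     # a closed bug may resurface
}

def move(status, new_status):
    """Change a bug's status, refusing transitions the life cycle does not allow."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot go from {status} to {new_status}")
    return new_status

# Happy path: NEW -> OPEN -> ASSIGN -> TEST -> VERIFIED -> CLOSED
s = "NEW"
for nxt in ["OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"]:
    s = move(s, nxt)
print(s)  # CLOSED
```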
7.3.Severity of a Bug
It indicates the impact each defect has on the testing efforts or users and administrators of the application under test.
This information is used by developers and management as the basis for assigning priority of work on defects.
7.4.Priority Levels of a Bug
Critical :
An item that prevents further testing of the product or function under test can be classified as Critical Bug. No workaround is possible for such bugs.
Examples of this include a missing menu option or security permission required to access a function under test.
Major / High :
A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a Major Bug. A workaround can be provided for such bugs.
Examples of this include inaccurate calculations; the wrong field being updated, etc
Average / Medium :
Defects which do not conform to standards and conventions can be classified as Medium Bugs. Easy workarounds exist to achieve functionality objectives.
Examples include matching visual and text links which lead to different end points.
Minor / Low :
Cosmetic defects which do not affect the functionality of the system can be classified as Minor Bugs.
Latent Defects: hidden defects that have existed in the system for some time but have not yet caused a failure, because the exact conditions needed to trigger them have not been met.
7.5.Various Bug tracking tools
The various bug tracking tools available are:
Quality Center® – from HP
Bugzilla® - from Mozilla
DevTrack® – from TechExcel
********************************8.Product Delivery *********************************
Following are the Test Deliverables:
Test Traceability Matrix
Test Plan
Testing Strategy
Test Cases (for functional testing)
Test Scenarios (for non-functional testing)
Test Scripts
Test Data
Test Results
Test Summary Report
Release Notes
Tested Build
8.1.Product Delivery - Test Metrics
Measuring the testing process with quantifiable indicators is known as test metrics.
There are several test metrics identified as part of the overall testing activity in order to track and measure the entire testing process.
These test metrics are collected at each phase of the testing life cycle/SDLC, analyzed and appropriate process improvements are determined and implemented.
The metrics should be constantly collected and evaluated as a parallel activity together with testing, both for manual and automated testing irrespective of the type of application.
8.2.Product Delivery - Test Metrics - Classification
Project Related Metrics – such as
Test Size,
# of Test Cases tested per day –Automated (NTTA)
# of Test Cases tested per day –Manual (NTTM)
# of Test Cases created per day – Manual (TCED)
Total number of review defects (RD)
Total number of testing defects (TD) etc.
8.3.Product Delivery – Test Metrics – Classification
Process Related Metrics – such as
Schedule Adherence (SA)
Effort Variance (EV)
Schedule Slippage (SS)
Test Cases and Scripts Rework Effort, etc.
Customer related Metrics – such as
Percentage of defects leaked per release (PDLPR)
Percentage of automation per release (PAPR)
Application Stability Index (ASI) etc.
9.Product Delivery – Acceptance testing – UAT
The client performs the acceptance testing at their own site. It is very similar to the system test performed by the Software Development Unit.
There is no specific prescription for the way they will carry out the testing, since the client performs this test.
It does not differ much from system testing.
This is just one level of testing done by the client for the overall product, and its test cases include the unit and integration test level details.
*********************************************************************************
Test Case Design Techniques:
- Boundary Value Analysis (BVA)
- Equivalence Class Partitioning
- Decision Table based testing.
- State Transition
- Error Guessing
Boundary Value Analysis (BVA)
Boundary value analysis is based on testing at the boundaries between partitions. It includes maximum, minimum, inside and outside boundaries, typical values, and error values.
It is generally seen that a large number of errors occur at the boundaries of the defined input values rather than at the center. BVA gives a selection of test cases which exercise the bounding values.
This black box testing technique complements equivalence partitioning. It is based on the principle that if a system works well for these particular values, then it will work well for all values which lie between the two boundary values.
Guidelines for Boundary Value analysis
- If an input condition is restricted between values x and y, then test cases should be designed with values x and y as well as values just above and below x and y.
- If an input condition accepts a large number of values, test cases should be developed which exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
- Apply guidelines 1 and 2 to output conditions: design tests that produce the minimum and maximum expected output values, and also test just below and above them.
Example: the input condition is valid between 1 and 10.
Boundary values: 0, 1, 2 and 9, 10, 11
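The boundary values in this example can be generated mechanically; a minimal sketch:

```python
# Boundary value analysis for a valid range [lo, hi]:
# test the boundaries themselves plus the values just inside and just outside.
def boundary_values(lo, hi):
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

print(boundary_values(1, 10))  # [0, 1, 2, 9, 10, 11]
```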
Equivalence Class Partitioning
Equivalence Class Partitioning allows you to divide a set of test conditions into partitions that can be considered the same. This software testing method divides the input domain of a program into classes of data from which test cases can be designed.
The concept behind this technique is that a test case for a representative value of a class is equivalent to a test of any other value in the same class. It allows you to identify valid as well as invalid equivalence classes.
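A minimal sketch of the idea, assuming an input valid between 1 and 10 as in the BVA example; one representative value stands for its whole class:

```python
# Equivalence class partitioning sketch: valid class 1..10, invalid classes on
# either side. The wide invalid ranges are an illustrative assumption.
classes = {
    "invalid_below": range(-10**6, 1),   # any value < 1
    "valid":         range(1, 11),       # 1..10
    "invalid_above": range(11, 10**6),   # any value > 10
}

def representative(cls):
    r = classes[cls]
    return r[len(r) // 2]   # pick a mid-class value as the representative

tests = {name: representative(name) for name in classes}
print(tests["valid"])  # 6 — one value covers the whole valid class
```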
Decision Table Based Testing.
A decision table is also known as a Cause-Effect table. This software testing technique is used for functions which respond to a combination of inputs or events. For example, a submit button should be enabled only if the user has entered all required fields.
The first task is to identify functionalities where the output depends on a combination of inputs. If there is a large set of input combinations, divide it into smaller subsets which are easier to manage in a decision table.
For every function, create a table and list down all the combinations of inputs and their respective outputs. This helps to identify conditions that would otherwise be overlooked by the tester.
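For the submit-button example, the full decision table can be enumerated mechanically; the required field names below are made up for illustration:

```python
# Decision table sketch for the submit-button example:
# the button is enabled only when every required field is filled.
from itertools import product

required = ["name", "email", "password"]   # hypothetical required fields

def submit_enabled(filled):
    return all(filled[f] for f in required)

# Enumerate every input combination — each one is a row of the decision table.
table = []
for values in product([False, True], repeat=len(required)):
    filled = dict(zip(required, values))
    table.append((values, submit_enabled(filled)))

enabled_rows = [row for row, out in table if out]
print(len(table), enabled_rows)  # 8 rows; only (True, True, True) enables submit
```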
State Transition
In the State Transition technique, changes in input conditions change the state of the Application Under Test (AUT). This testing technique allows the tester to test the behavior of the AUT. The tester performs this by entering various input conditions in a sequence. In the State Transition technique, the testing team provides positive as well as negative input test values to evaluate the system behavior.
Guideline for State Transition:
- State transition should be used when the testing team is testing the application for a limited set of input values.
- The technique should be used when the testing team wants to test sequences of events which happen in the application under test.
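A minimal sketch of state transition testing, using a hypothetical login screen that locks after three consecutive wrong passwords; both a negative and a positive input sequence are exercised:

```python
# State transition sketch (the login scenario and its states are invented).
TRANSITIONS = {
    ("start",     "correct"): "logged_in",
    ("start",     "wrong"):   "attempt_1",
    ("attempt_1", "correct"): "logged_in",
    ("attempt_1", "wrong"):   "attempt_2",
    ("attempt_2", "correct"): "logged_in",
    ("attempt_2", "wrong"):   "locked",
}

def run_sequence(inputs):
    """Feed a sequence of inputs and return the final state."""
    state = "start"
    for event in inputs:
        state = TRANSITIONS.get((state, event), state)  # terminal states ignore events
    return state

print(run_sequence(["wrong", "wrong", "wrong"]))  # locked    (negative sequence)
print(run_sequence(["wrong", "correct"]))         # logged_in (positive sequence)
```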
*********************************************************************************
Test Estimation Techniques:
- Function Point / Testing Point Analysis
- Percentage Distribution
- Ad-hoc Method (experience-based testing estimation technique)

Use-Case Point Method:
The UCP method is based on use cases: we calculate the unadjusted actor weights and unadjusted use-case weights to determine the software testing estimate.
A use case is a document which specifies the different users, systems, or other stakeholders interacting with the application concerned. These are named "Actors". The interactions accomplish defined goals, protecting the interests of all stakeholders, through different behaviors or flows termed scenarios.
Step 1 − Count the no. of actors. Actors include positive, negative and exceptional.
Step 2 − Calculate unadjusted actor weights as
Unadjusted Actor Weights = Total no. of Actors
Step 3 − Count the number of use-cases.
Step 4 − Calculate unadjusted use-case weights as
Unadjusted Use-Case Weights = Total no. of Use-Cases
Step 5 − Calculate unadjusted use-case points as
Unadjusted Use-Case Points = (Unadjusted Actor Weights + Unadjusted Use-Case Weights)
Step 6 − Determine the technical/environmental factor (TEF). If unavailable, take it as 0.50.
Step 7 − Calculate adjusted use-case point as
Adjusted Use-Case Point = Unadjusted Use-Case Points × [0.65 + (0.01 × TEF)]
Step 8 − Calculate total effort as
Total Effort = Adjusted Use-Case Point × 2
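The eight steps above translate directly into arithmetic. A sketch, with TEF taken as 0.50 when unavailable (step 6); the actor and use-case counts are made-up inputs for illustration:

```python
# Use-Case Point estimation, following the eight steps in the text.
def ucp_effort(num_actors, num_use_cases, tef=0.50):
    unadjusted_actor_weights = num_actors                           # step 2
    unadjusted_use_case_weights = num_use_cases                     # step 4
    uucp = unadjusted_actor_weights + unadjusted_use_case_weights   # step 5
    aucp = uucp * (0.65 + 0.01 * tef)                               # step 7
    return aucp * 2                                                 # step 8

# Made-up example: 10 actors, 20 use cases.
print(round(ucp_effort(10, 20), 2))  # 39.3  (= 30 × 0.655 × 2)
```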
Work Breakdown Structure:
Step 1 − Create WBS by breaking down the test project into small pieces.
Step 2 − Divide modules into sub-modules.
Step 3 − Divide sub-modules further into functionalities.
Step 4 − Divide functionalities into sub-functionalities.
Step 5 − Review all the testing requirements to make sure they are added in WBS.
Step 6 − Figure out the number of tasks your team needs to complete.
Step 7 − Estimate the effort for each task.
Step 8 − Estimate the duration of each task.
Wideband Delphi Technique:
In the Wideband Delphi method, the WBS is distributed to a team of 3–7 members for re-estimating the tasks. The final estimate is the result of the summarized estimates, based on team consensus.
This method relies more on experience than on any statistical formula. It was popularized by Barry Boehm, who emphasized group iteration to reach a consensus, with the team visualizing different aspects of the problem while estimating the test effort.
Function Point / Testing Point Analysis:
FPs indicate the functionality of a software application from the user's perspective, and are used as a technique to estimate the size of a software project.
In testing, estimation is based on the requirement specification document, or on a previously created prototype of the application. To calculate FPs for a project, some major components are required. They are −
Unadjusted Data Function Points − i) Internal Files, ii) External Interfaces
Unadjusted Transaction Function Points − i) User Inputs, ii) User Outputs & iii) User Inquiries
Capers Jones basic formula −
Number of Test Cases = (Number of Function Points) × 1.2
Total Actual Effort (TAE) −
TAE = (Number of Test Cases) × (Percentage of Development Effort / 100)
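The two formulas above in code; the function point count and development effort percentage are made-up inputs for illustration:

```python
# Capers Jones' basic formula, as given in the text.
def estimate_test_effort(function_points, dev_effort_pct):
    num_test_cases = function_points * 1.2
    total_actual_effort = num_test_cases * (dev_effort_pct / 100)
    return num_test_cases, total_actual_effort

# Made-up example: 500 function points, development effort percentage of 50.
cases, tae = estimate_test_effort(500, 50)
print(round(cases), round(tae))  # 600 300
```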
Percentage Distribution:
In this technique, all the phases of Software Development Life Cycle (SDLC) are assigned effort in %. This can be based on past data from similar projects. For example −
Phase | % of Effort |
---|---|
Project Management | 7% |
Requirements | 9% |
Design | 16% |
Coding | 26% |
Testing (all Test Phases) | 27% |
Documentation | 9% |
Installation and Training | 6% |
Next, % of effort for testing (all test phases) is further distributed for all Testing Phases −
All Testing Phases | % of Effort |
---|---|
Component Testing | 16 |
Independent Testing | 84 |
Total | 100 |
Independent Testing | % of Effort |
---|---|
Integration Testing | 24 |
System Testing | 52 |
Acceptance Testing | 24 |
Total | 100 |
System Testing | % of Effort |
---|---|
Functional System Testing | 65 |
Non-functional System Testing | 35 |
Total | 100 |
Each Testing Phase | % of Effort |
---|---|
Test Planning and Design Architecture | 50% |
Review phase | 50% |
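The percentage tables above can be applied directly to a project total; the 1000-hour figure below is a made-up input for illustration:

```python
# Percentage distribution of effort, using the tables in the text.
sdlc_pct = {
    "Project Management": 7, "Requirements": 9, "Design": 16,
    "Coding": 26, "Testing (all Test Phases)": 27,
    "Documentation": 9, "Installation and Training": 6,
}
testing_split = {"Component Testing": 16, "Independent Testing": 84}
independent_split = {"Integration Testing": 24, "System Testing": 52, "Acceptance Testing": 24}

total_hours = 1000   # made-up project total effort

testing_hours = total_hours * sdlc_pct["Testing (all Test Phases)"] / 100   # 270.0
independent = testing_hours * testing_split["Independent Testing"] / 100    # 226.8
system = independent * independent_split["System Testing"] / 100            # ≈ 117.9

print(testing_hours, round(independent, 1), round(system, 1))  # 270.0 226.8 117.9
```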
Experience-based Testing Estimation Technique:
This technique is based on analogies and experts. It assumes that you have already tested similar applications in previous projects and collected metrics from those projects, as well as metrics from previous tests. Take inputs from subject matter experts who know the application (as well as testing) very well, use the metrics you have collected, and arrive at the testing effort.