Saturday, December 26, 2020

Acceptance testing

 
Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product. Objectives of acceptance testing include:
  • Establishing confidence in the quality of the system as a whole
  • Validating that the system is complete and will work as expected
  • Verifying that functional and non-functional behaviors of the system are as specified

Acceptance testing may produce information to assess the system’s readiness for deployment and use by the customer (end-user). Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards.

Common forms of acceptance testing include the following:

  • User acceptance testing
  • Operational acceptance testing
  • Contractual and regulatory acceptance testing
  • Alpha and beta testing

User acceptance testing (UAT)

User acceptance testing of the system is typically focused on validating the fitness for use of the system
by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost, and risk.

Operational acceptance testing (OAT)

The acceptance testing of the system by operations or systems administration staff is usually performed
in a (simulated) production environment. The tests focus on operational aspects, and may include:

  • Testing of backup and restore (see the sketch after this list)
  • Installing, uninstalling and upgrading
  • Disaster recovery
  • User management
  • Maintenance tasks 
  • Data load and migration tasks 
  • Checks for security vulnerabilities 
  • Performance testing
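
As a concrete illustration of the first item above, the sketch below shows what an automated backup-and-restore check might look like. It uses Python's standard sqlite3 module purely as a stand-in; a real operational acceptance test would exercise the actual backup tooling of the (simulated) production environment.

    # Hypothetical backup/restore check; sqlite3 stands in for real tooling.
    import sqlite3
    import unittest

    class BackupRestoreTest(unittest.TestCase):
        def test_backup_and_restore_preserves_data(self):
            # Create a source database with known data
            source = sqlite3.connect(":memory:")
            source.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
            source.execute("INSERT INTO orders VALUES (1, 42.0)")
            source.commit()

            # Back up into a fresh database, simulating a restore target
            restored = sqlite3.connect(":memory:")
            source.backup(restored)

            # Verify the restored data matches the original
            rows = restored.execute("SELECT id, amount FROM orders").fetchall()
            self.assertEqual(rows, [(1, 42.0)])

    if __name__ == "__main__":
        unittest.main()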

The main objective of operational acceptance testing is building confidence that the operators or system
administrators can keep the system working properly for the users in the operational environment, even
under exceptional or difficult conditions.

Contractual and regulatory acceptance testing

Contractual acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Contractual acceptance testing is often performed by users or by independent testers.
Regulatory acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations. Regulatory acceptance testing is often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies.
The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved.

Alpha and beta testing

Alpha and beta testing are typically used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential or existing users, customers, and/or operators before the software product is put on the market. 

Alpha testing is performed at the developing organization’s site, not by the development team, but by potential or existing customers, operators, or an independent test team.

Beta testing is performed by potential or existing customers, and/or operators at their own locations. Beta testing may come after alpha testing, or may occur without any preceding alpha testing having occurred.

One objective of alpha and beta testing is building confidence among potential or existing customers, and/or operators that they can use the system under normal, everyday conditions, and in the operational environment(s) to achieve their objectives with minimum difficulty, cost, and risk. Another objective may be the detection of defects related to the conditions and environment(s) in which the system will be used, especially when those conditions and environment(s) are difficult to replicate by the development team.

Test basis

Work products that can be used as a test basis for any form of acceptance testing include:

  • Business processes
  • User or business requirements
  • Regulations, legal contracts and standards
  • Use cases and/or user stories
  • System requirements
  • System or user documentation
  • Installation procedures
  • Risk analysis reports

In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the
following work products can be used:

  • Backup and restore procedures
  • Disaster recovery procedures
  • Non-functional requirements
  • Operations documentation
  • Deployment and installation instructions
  • Performance targets
  • Database packages
  • Security standards or regulations

Typical test objects

  • System under test
  • System configuration and configuration data
  • Business processes for a fully integrated system
  • Recovery systems and hot sites (for business continuity and disaster recovery testing)
  • Operational and maintenance processes
  • Forms
  • Reports
  • Existing and converted production data

Typical defects and failures

  • System workflows do not meet business or user requirements
  • Business rules are not implemented correctly
  • System does not satisfy contractual or regulatory requirements
  • Non-functional failures, such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform

Specific approaches and responsibilities

Acceptance testing is often the responsibility of the customers, business users, product owners, or operators of a system, and other stakeholders may be involved as well.
Acceptance testing is often thought of as the last test level in a sequential development lifecycle, but it may also occur at other times, for example:

  • Acceptance testing of a COTS software product may occur when it is installed or integrated
  • Acceptance testing of a new functional enhancement may occur before system testing

In iterative development, project teams can employ various forms of acceptance testing during and at the end of each iteration, such as those focused on verifying a new feature against its acceptance criteria and those focused on validating that a new feature satisfies the users’ needs. In addition, alpha tests and beta tests may occur, either at the end of each iteration, after the completion of each iteration, or after a series of iterations. User acceptance tests, operational acceptance tests, regulatory acceptance tests, and contractual acceptance tests also may occur, either at the close of each iteration, after the completion of each iteration, or after a series of iterations.


Saturday, December 12, 2020

Test levels

Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the test process, performed in relation to software at a given level of development, from individual units or components to complete systems or, where applicable, systems of systems. Test levels are related to other activities within the software development lifecycle. 

 The test levels are:

  • Component testing: searches for defects in, and verifies the functioning of, software components (e.g., modules, programs, objects, classes) that are separately testable;
  • Integration testing: tests interfaces between components, interactions with different parts of a system (such as the operating system, file system, and hardware), or interfaces between systems;
  • System testing: concerned with the behavior of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements;
  • Acceptance testing: validation testing with respect to user needs, requirements, and business processes conducted to determine whether or not to accept the system.

Test levels are characterized by the following attributes:

  • Specific objectives
  • Test basis, referenced to derive test cases
  • Test object (i.e., what is being tested)
  • Typical defects and failures
  • Specific approaches and responsibilities

For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.


System testing


System testing focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks. Objectives of system testing include:
  • Reducing risk
  • Verifying whether the functional and non-functional behaviors of the system are as designed and specified
  • Validating that the system is complete and will work as expected
  • Building confidence in the quality of the system as a whole
  • Finding defects
  • Preventing defects from escaping to higher test levels or production

In some cases automated system regression tests provide confidence that changes have not broken existing features or end-to-end capabilities. System testing often produces information that is used by stakeholders to make release decisions. System testing may also satisfy legal or regulatory requirements or standards.

The test environment should ideally correspond to the final target or production environment.

Test basis

  • System and software requirement specifications (functional and non-functional)
  • Risk analysis reports
  • Use cases
  • Epics and user stories
  • Models of system behavior
  • State diagrams
  • System and user manuals

Test objects

  • Applications
  • Hardware/software systems
  • Operating systems
  • System under test (SUT)
  • System configuration and configuration data

Typical defects and failures

  • Incorrect calculations
  • Incorrect or unexpected system functional or non-functional behavior
  • Incorrect control and/or data flows within the system
  • Failure to properly and completely carry out end-to-end functional tasks
  • Failure of the system to work properly in the system environment(s)
  • Failure of the system to work as described in system and user manuals

Specific approaches and responsibilities

System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional and non-functional. Testers may also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based (white-box) techniques may also be used to assess the thoroughness of testing elements such as menu dialog structure or web page navigation.
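
To make the decision table idea concrete, here is a hedged sketch in Python: a hypothetical business rule for discounts, with each row of the decision table driving one test. The discount_rate function and its rules are illustrative assumptions, not taken from any specification above.

    import unittest

    def discount_rate(is_member: bool, order_total: float) -> float:
        """Hypothetical business rule under test."""
        if is_member and order_total >= 100:
            return 0.15
        if is_member:
            return 0.05
        if order_total >= 100:
            return 0.10
        return 0.0

    # Each decision table row: (condition values..., expected action)
    DECISION_TABLE = [
        (True,  150.0, 0.15),  # member, large order
        (True,   50.0, 0.05),  # member, small order
        (False, 150.0, 0.10),  # non-member, large order
        (False,  50.0, 0.00),  # non-member, small order
    ]

    class DiscountRuleTest(unittest.TestCase):
        def test_decision_table(self):
            for is_member, total, expected in DECISION_TABLE:
                with self.subTest(is_member=is_member, total=total):
                    self.assertEqual(discount_rate(is_member, total), expected)

    if __name__ == "__main__":
        unittest.main()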

System testing is typically carried out by independent testers who rely heavily on specifications. Defects in specifications (e.g., missing user stories, incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements about, expected system behavior. Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively. Early involvement of testers in user story refinement or static testing activities, such as reviews, helps to reduce the incidence of such situations.


Integration testing

 
Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system, and hardware), or interfaces between systems. Objectives of integration testing include:
  • Reducing risk
  • Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified
  • Building confidence in the quality of the interfaces
  • Finding defects (which may be in the interfaces themselves or within the components or systems)
  • Preventing defects from escaping to higher test levels

In some cases automated integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems.

There are two different levels of integration testing, which may be carried out on test objects of varying size as follows:

  • Component integration testing focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process. 
  • System integration testing focuses on the interactions and interfaces between systems, packages, and microservices. System integration testing can also cover interactions with, and interfaces provided by, external organizations (e.g., web services). In this case, the developing organization does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organization’s code are resolved, arranging for test environments, etc.). System integration testing may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development).

Test basis

  • Software and system design
  • Sequence diagrams
  • Interface and communication protocol specifications
  • Use cases
  • Architecture at component or system level
  • Workflows
  • External interface definitions

Test objects

  • Subsystems
  • Databases
  • Infrastructure
  • Interfaces
  • APIs
  • Microservices

Typical defects and failures

Typical defects and failures for component integration testing include:

  • Incorrect data, missing data, or incorrect data encoding
  • Incorrect sequencing or timing of interface calls
  • Interface mismatch
  • Failures in communication between components
  • Unhandled or improperly handled communication failures between components
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components

Typical defects and failures for system integration testing include:

  • Inconsistent message structures between systems
  • Incorrect data, missing data, or incorrect data encoding
  • Interface mismatch
  • Failures in communication between systems
  • Unhandled or improperly handled communication failures between systems 
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems
  • Failure to comply with mandatory security regulations

Specific approaches and responsibilities

Component integration tests and system integration tests should concentrate on the integration itself. For example, if integrating module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules, as that should have been covered during component testing. If integrating system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems, as that should have been covered during system testing. Functional, non-functional, and structural test types are applicable.
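
A minimal sketch of this focus, with hypothetical modules: the test below wires a real "module B" into "module A" and checks the data passed across their interface, rather than re-testing each module's internal logic.

    import unittest

    class PriceCalculator:              # "module B"
        def total(self, items):
            return sum(price for _, price in items)

    class OrderService:                 # "module A", which calls B
        def __init__(self, calculator):
            self.calculator = calculator
        def checkout(self, items):
            # The interface under test: A hands its item list to B
            return {"items": len(items), "total": self.calculator.total(items)}

    class OrderPricingIntegrationTest(unittest.TestCase):
        def test_order_service_communicates_with_calculator(self):
            service = OrderService(PriceCalculator())   # real B, no stub
            result = service.checkout([("book", 10.0), ("pen", 2.5)])
            # Assert on what crosses the interface, not on B's internals
            self.assertEqual(result, {"items": 2, "total": 12.5})

    if __name__ == "__main__":
        unittest.main()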

Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers. Ideally, testers performing system integration testing should understand the system architecture, and should have influenced integration planning.

Big-bang testing is one extreme, in which all components or systems are integrated simultaneously, after which everything is tested as a whole. Big-bang testing has the advantage that everything is finished before integration testing starts; there is no need to simulate (as yet unfinished) parts. The major disadvantage is that it is generally time-consuming, and that it is difficult to trace the cause of failures with such late integration.

Incremental testing is the other extreme, in which components or systems are integrated one by one, and a test is carried out after each step. The incremental approach has the advantage that defects are found early, in a smaller assembly, when it is relatively easy to detect their cause. A disadvantage is that it can be time-consuming, since stubs and drivers have to be developed and used in the tests. Within incremental integration testing, a range of possibilities exists, partly depending on the system architecture:

  • Top-down: testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs.
  • Bottom-up: testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
  • Functional incremental: integration and testing takes place on the basis of the functions or functionality, as documented in the functional specification.

The preferred integration sequence and the number of integration steps required depend on the location in the architecture of the high-risk interfaces. The best choice is to start integration with those interfaces that are expected to cause the most problems; doing so helps prevent major defects from surfacing at the end of the integration test stage. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than big-bang. If integration tests are planned before components or systems are built, they can be developed in the order required for the most efficient testing.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting. This is one reason that continuous integration, where software is integrated on a component-by-component basis (i.e., functional integration), has become common practice. Such continuous integration often includes automated regression testing, ideally at multiple test levels.


Component testing

 
Component testing (also known as unit or module testing) searches for defects in and verifies the functioning of software modules, programs, objects, classes, etc. that are separately testable. Objectives of component testing include:
  • Reducing risk
  • Verifying whether the functional and non-functional behaviors of the component are as designed and specified
  • Building confidence in the component’s quality
  • Finding defects in the component
  • Preventing defects from escaping to higher test levels

In some cases, especially in incremental and iterative development models (e.g., Agile) where code changes are ongoing, automated component regression tests play a key role in building confidence that changes have not broken existing components.

Component testing is often done in isolation from the rest of the system, depending on the software development lifecycle model and the system, which may require mock objects, service virtualization, harnesses, stubs, and drivers. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.
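
The sketch below illustrates the stub/driver distinction with hypothetical components: the component under test (Converter) calls a rate service that is replaced by a stub, while the driver is the code that calls the Converter and checks its result.

    class RateServiceStub:
        """Stub: replaces a component that the component under test
        calls; returns a canned, predictable answer."""
        def current_rate(self, currency):
            return 2.0

    class Converter:
        """Component under test: depends on a rate service."""
        def __init__(self, rate_service):
            self.rate_service = rate_service
        def to_eur(self, amount, currency):
            return amount / self.rate_service.current_rate(currency)

    def driver():
        """Driver: calls the component under test and checks the result."""
        converter = Converter(RateServiceStub())
        assert converter.to_eur(10.0, "USD") == 5.0
        print("Converter component test passed")

    if __name__ == "__main__":
        driver()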

 

Component testing may include testing of functionality (e.g., correctness of calculations) and specific non-functional characteristics, such as resource behavior (e.g., memory leaks), performance, or robustness, as well as structural testing (e.g., decision coverage).

Test basis

  • Detailed design
  • Code
  • Data model
  • Component specifications

Test objects

  • Components, units or modules
  • Code and data structures
  • Classes
  • Database modules

Typical defects and failures

  • Incorrect functionality (e.g., not as described in design specifications) 
  • Data flow problems 
  • Incorrect code and logic

Specific approaches and responsibilities

Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and in practice usually involves the programmer who wrote the code. Sometimes, depending on the applicable level of risk, component testing is carried out by a different programmer thereby introducing independence. Defects are typically fixed as soon as they are found, without formally recording the incidents found.

However, in Agile development especially, writing automated component test cases may precede writing application code. One example is test-driven development (TDD), which is highly iterative and based on cycles of developing automated test cases, then building and integrating small pieces of code, then executing the component tests, correcting any issues, and refactoring the code. This process continues until the component has been completely built and all component tests pass.
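
A minimal sketch of one TDD cycle, using Python's unittest; the leap-year rule is a hypothetical example. In practice the tests below would be written first and seen to fail, after which just enough code is written to make them pass, followed by refactoring.

    import unittest

    def leap_year(year: int) -> bool:
        # Minimal implementation, written only after the tests were red
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        def test_divisible_by_four_is_leap(self):
            self.assertTrue(leap_year(2024))
        def test_century_is_not_leap(self):
            self.assertFalse(leap_year(1900))
        def test_every_fourth_century_is_leap(self):
            self.assertTrue(leap_year(2000))

    if __name__ == "__main__":
        unittest.main()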

Test process

 

The fundamental test process consists of the following main activities:

Test planning

Test planning is the activity of defining the objectives of testing and specifying the test activities needed to meet those objectives and the overall mission.

Test planning has the following major tasks, given approximately in order, which help us build a test plan:

  • Determine the scope and risks and identify the objectives of testing
  • Determine the test approach  
  • Implement the test policy and/or the test strategy
  • Determine the required test resources
  • Schedule test analysis and design tasks, test implementation, execution and evaluation
  • Determine the exit criteria

Test monitoring and control

Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan. 

Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). 

Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the definition of done in some software development life-cycle models. 

Test monitoring and control has the following major tasks:

  • Measure and analyze the results of reviews and testing
  • Monitor and document progress, test coverage and exit criteria
  • Provide information on testing
  • Initiate corrective actions 
  • Make decisions

Test analysis

During test analysis, the test basis is analyzed to identify testable features and define associated test
conditions.
 

Test analysis includes the following major activities:

  • Analyzing the test basis appropriate to the test level being considered, for example:

      o Requirement specifications, such as business requirements, user stories, use cases
      o Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications
      o The implementation of the component or system itself, including code, database metadata and queries, and interfaces
      o Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system

  • Evaluating the test basis and test items to identify defects of various types, such as ambiguities, omissions, inconsistencies, inaccuracies, contradictions, superfluous statements  
  • Identifying features and sets of features to be tested 
  • Defining and prioritizing test conditions for each feature based on analysis of the test basis, and
    considering functional, non-functional, and structural characteristics, other business and technical
    factors, and levels of risks 
  • Capturing bi-directional traceability between each element of the test basis and the associated
    test conditions

The application of black-box, white-box, and experience-based test techniques can be useful in the
process of test analysis to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.

In some cases, test analysis produces test conditions which are to be used as test objectives in test
charters. When these test objectives are traceable to the test basis, coverage achieved during such experience-based testing can be measured.

The identification of defects during test analysis is an important potential benefit, especially where no
other review process is being used and/or the test process is closely connected with the review process.
Such test analysis activities not only verify whether the requirements are consistent, properly expressed,
and complete, but also validate whether the requirements properly capture customer, user, and other
stakeholder needs.
 

Test design 

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test
cases, and other testware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”

Test design includes the following major activities:
  • Designing and prioritizing test cases and sets of test cases
  • Identifying necessary test data to support test conditions and test cases
  • Designing the test environment and identifying any required infrastructure and tools
  • Capturing bi-directional traceability between the test basis, test conditions, and test cases
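
As an illustration of the last item, here is a minimal sketch of bi-directional traceability; all identifiers are hypothetical. The same mapping supports both forward tracing (requirement to test cases) and backward tracing (test case to requirements).

    # Forward direction: requirement -> test conditions -> test cases
    TRACE = {
        "REQ-01": {"COND-01": ["TCASE-001", "TCASE-002"],
                   "COND-02": ["TCASE-003"]},
        "REQ-02": {"COND-03": ["TCASE-004"]},
    }

    def cases_for_requirement(req_id):
        """Forward trace: which test cases cover a requirement?"""
        return sorted(case for cases in TRACE[req_id].values() for case in cases)

    def requirements_for_case(case_id):
        """Backward trace: which requirements does a test case cover?"""
        return sorted(req for req, conds in TRACE.items()
                      if any(case_id in cases for cases in conds.values()))

    print(cases_for_requirement("REQ-01"))     # ['TCASE-001', 'TCASE-002', 'TCASE-003']
    print(requirements_for_case("TCASE-004"))  # ['REQ-02']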

Test implementation

During test implementation, the testware necessary for test execution is created and/or completed,
including sequencing the test cases into test procedures. So, test design answers the question “how to
test?” while test implementation answers the question “do we now have everything in place to run the
tests?”

Test implementation includes the following major activities:

  • Developing and prioritizing test procedures, and, potentially, creating automated test scripts
  • Creating test suites from the test procedures and (if any) automated test scripts
  • Arranging the test suites within a test execution schedule in a way that results in efficient test execution  
  • Building the test environment and verifying that everything needed has been set up correctly
  • Preparing test data and ensuring it is properly loaded in the test environment
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites

Test execution

Test execution includes the following major activities:

  • Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
  • Executing tests either manually or by using test execution tools
  • Comparing actual results with expected results (see the sketch after this list)
  • Analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives also may occur)
  • Reporting defects based on the failures observed
  • Logging the outcome of test execution (e.g., pass, fail, blocked) 
  • Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing) 
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results.
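
The sketch below shows result comparison and outcome logging in miniature; the system_under_test function and the test case identifiers are hypothetical placeholders.

    import logging

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def system_under_test(x):
        return x * 2    # stand-in for the real test object

    TEST_CASES = [
        ("TCASE-001", 2, 4),     # (id, input, expected result)
        ("TCASE-002", 5, 10),
        ("TCASE-003", -1, -2),
    ]

    def execute():
        for case_id, test_input, expected in TEST_CASES:
            try:
                actual = system_under_test(test_input)
            except Exception as exc:
                # Treat an execution error as a blocked test (rough proxy)
                logging.info("%s: BLOCKED (%s)", case_id, exc)
                continue
            outcome = "PASS" if actual == expected else "FAIL"
            logging.info("%s: %s (expected=%s, actual=%s)",
                         case_id, outcome, expected, actual)

    if __name__ == "__main__":
        execute()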

Test completion 

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished, a test level is completed, or a maintenance release has been completed.

Test completion includes the following major activities:

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  • Creating a test summary report to be communicated to stakeholders
  • Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for later reuse
  • Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
  • Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
  • Using the information gathered to improve test process maturity

Thursday, December 10, 2020

Continuous Integration

A CI system allows the automated compilation, integration, testing and deployment of the entire code base in a project. 

Delivery of a product increment requires reliable, working, integrated software at the end of every
sprint. Continuous integration addresses this challenge by merging all changes made to the software
and integrating all changed components regularly, at least once a day. Configuration management,
compilation, software build, deployment, and testing are wrapped into a single, automated, repeatable
process. 

Following the developers’ coding, debugging, and check-in of code into a shared source code repository, a continuous integration process consists of the following automated activities (a minimal orchestration sketch follows the list):

  • Static code analysis: executing static code analysis and reporting results
  • Compile: compiling and linking the code, generating the executable files
  • Unit test: executing the unit tests, checking code coverage and reporting test results
  • Deploy: installing the build into a test environment
  • Integration test: executing the integration tests and reporting results
  • Report (dashboard): posting the status of all these activities to a publicly visible location or emailing status to the team
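
A hedged sketch of how these activities could be chained, written as a small Python driver; the commands are placeholders (flake8, pytest, and a deploy.sh script are assumed to exist), and a real pipeline would run the project's actual tools and post results to the team's dashboard.

    import subprocess
    import sys

    PIPELINE = [
        ("static analysis",   ["flake8", "src/"]),
        ("compile check",     ["python", "-m", "compileall", "-q", "src/"]),
        ("unit tests",        ["pytest", "tests/unit", "--cov=src"]),
        ("deploy",            ["./deploy.sh", "test-env"]),
        ("integration tests", ["pytest", "tests/integration"]),
    ]

    def run_pipeline():
        for stage, command in PIPELINE:
            print(f"=== {stage}: {' '.join(command)}")
            result = subprocess.run(command)
            if result.returncode != 0:
                print(f"Pipeline failed at stage: {stage}")
                sys.exit(result.returncode)   # failure becomes the build status
        print("Pipeline succeeded: build is green")

    if __name__ == "__main__":
        run_pipeline()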

Continuous integration can provide the following benefits:

  • Allows earlier detection and easier root cause analysis of integration problems and conflicting changes
  • Gives the development team regular feedback on whether the code is working
  • Keeps the version of the software being tested within a day of the version being developed
  • Reduces regression risk associated with developer code refactoring due to rapid re-testing of the code base after each small set of changes
  • Provides confidence that each day’s development work is based on a solid foundation
  • Makes progress toward the completion of the product increment visible, encouraging developers and testers
  • Eliminates the schedule risks associated with big-bang integration
  • Provides constant availability of executable software throughout the sprint for testing, demonstration, or education purposes
  • Reduces repetitive manual testing activities
  • Provides quick feedback on decisions made to improve quality and tests

However, continuous integration is not without its risks and challenges:

  • Continuous integration tools have to be introduced and maintained
  • The continuous integration process must be defined and established
  • Test automation requires additional resources and can be complex to establish. Data sets have to be created, people trained to use the tools and time must be spent to create, for example, a keyword test framework. Also, the test results analysis can be time-consuming if there are too many test scripts. 
  • Well-specified test cases are required to achieve the advantages of automated testing, which increases the time needed for testing tasks
  • Thorough test coverage is essential to achieve automated testing advantages
  • Teams sometimes over-rely on unit tests and perform too little system and acceptance testing. The test strategy must plan how upper level tests, such as system or end-to-end tests, will be executed

Continuous integration requires the use of tools, including tools for testing, tools for automating the
build process, and tools for version control.


Wednesday, December 9, 2020

Traditional vs Agile Testing

Traditional Testing

With the waterfall model, testing tends to happen towards the end of the project life cycle, so defects are detected close to the live implementation date.

The V-model was developed to address some of the problems experienced using the traditional waterfall approach. The V-model provides guidance that testing needs to begin as early as possible in the life cycle, which is one of the fundamental principles of structured testing. There are a variety of activities that need to be performed before the end of the coding phase. These activities should be carried out in parallel with development activities, and testers need to work with developers and business analysts so they can perform these activities and tasks and produce a set of test deliverables. By starting test design early, defects are often found in the test basis documents.

Agile Testing

  • It promotes the generation of business stories to define the functionality.
  • It demands an on-site customer for continual feedback and to define and carry out functional acceptance testing.
  • It promotes pair programming and shared code ownership amongst the developers.
  • It states that component test scripts shall be written before the code is written and that those tests should be automated.
  • It states that integration and testing of the code shall happen several times a day.
  • It states that we always implement the simplest solution to meet today's problems.

Agile testing, with its focus on business value and on delivering the quality customers require, differs from traditional testing, which focuses on conformance to requirements.
