Non-functional Quality Characteristics
| Non-functional Characteristic | Sub-characteristics |
| --- | --- |
| Performance efficiency | Time-behavior, Resource utilization, Capacity |
| Compatibility | Co-existence, Interoperability |
| Usability | Appropriateness recognizability, Learnability, Operability, User error protection, User interface aesthetics, Accessibility |
| Reliability | Maturity, Availability, Fault tolerance, Recoverability |
| Security | Confidentiality, Integrity, Non-repudiation, Accountability, Authenticity |
| Maintainability | Modularity, Reusability, Analyzability, Modifiability, Testability |
Usability and User Experience
User eXperience (UX) expands the term usability to include aesthetic and emotional factors such as an appealing, desirable design, confidence building, and satisfaction in use (e.g., pleasure, comfort). The context in which the system is used strongly influences the user experience, which may differ completely depending on factors such as location (e.g., the user is sitting at a desk, driving a car, or hiking), weather (e.g., sun, rain, cold), the user's condition (e.g., fatigue, age), and environment (e.g., stressful, noisy).
UX requirements analysis is based upon the following four pillars:
- User analysis: Users are categorized in terms of physical and intellectual characteristics, technical skills, business knowledge, and socio-economic and cultural background. Business analysts can also use models such as personas to represent typical users.
- Task analysis: Functionality is identified and formalized (e.g., through use cases and scenarios). User behavior and expectations are analyzed to design an optimized system or product.
- Context analysis: The context in which the system or product will be used is analyzed. External conditions (e.g., light, temperature, movement, humidity, or dust), physical conditions (e.g., sitting, standing, lying, moving, hands-free), or “psychological” conditions (e.g., stress level, motivation, or the difference between private and professional usage) are considered to give direction to the subsequent design steps. Devices, platforms, and form factors (e.g., device-specific display sizes) are also considered as part of the context.
- Competition analysis: Unless creating a disruptive design is the goal, business analysts should analyze the competitors and take inspiration from the successful implementation of their solutions to retain or attract users and customers. Another source of inspiration can come from successful solutions found in similar or even different sectors.
Due to common human limitations and biases (e.g., cognitive or perceptual biases, visual impairment, inexperience), some users might face specific and sometimes severe difficulties in using software or products that are part of the business solution. Business analysts and testers should assess whether products or services are accessible to all users by considering these limitations when designing acceptance criteria and test cases.
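For example, a simple automated accessibility check can flag images that lack alternative text, a common barrier for users of screen readers. The following is a minimal sketch using Python's standard library; the sample HTML fragment is an illustrative assumption, and a real audit would cover far more accessibility criteria.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that have no alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.violations.append(attributes.get("src", "<unknown>"))

# Hypothetical page fragment used only for demonstration.
page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'

checker = MissingAltChecker()
checker.feed(page)
print("Images missing alt text:", checker.violations)  # ['chart.png']
```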
Usability Testing
There are different approaches to testing usability in acceptance testing:
- Checklist-based evaluations: Users assess the system or product under test against checklists that allow them to evaluate, compare, and qualify their experience.
- Expert reviews: Usability experts evaluate the usability of the system or product according to pre-defined criteria or checklists based upon usability heuristics to identify strong and weak points of an interface.
- Walkthrough and think-aloud techniques: Users explore the product or system and describe their actions and impressions out loud while doing so. They may be given specific tasks to accomplish in order to identify how they interact with the product and to learn about their expectations or difficulties.
- Biometrics-based evaluations: User behavior is monitored with specific biometric devices (e.g., eye-movement recording, mouse-movement tracking) to understand how the user interacts with a page or a system, what attracts their attention, or what is more or less visible.
- Log file analysis: Retrospective analysis is conducted to review how users interacted with the system, to discover areas for possible improvement, and to verify whether actual use correlates with the intended profile and usage (see the sketch after this list).
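As an illustration of log file analysis, the following sketch tallies feature usage and server errors from access-log lines. The log format and the sample lines are assumptions for demonstration; a real analysis would read the system's actual logs from disk.

```python
import re
from collections import Counter

# Hypothetical access-log lines in Common Log Format (assumption).
log_lines = [
    '10.0.0.1 - alice [05/May/2024:10:01:02 +0000] "GET /reports HTTP/1.1" 200 5123',
    '10.0.0.2 - bob [05/May/2024:10:01:07 +0000] "GET /reports HTTP/1.1" 200 4890',
    '10.0.0.1 - alice [05/May/2024:10:02:11 +0000] "POST /export HTTP/1.1" 500 210',
]

pattern = re.compile(r'"(?P<method>\w+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

usage, errors = Counter(), Counter()
for line in log_lines:
    match = pattern.search(line)
    if match:
        usage[match.group("path")] += 1          # how often each feature is used
        if match.group("status").startswith("5"):
            errors[match.group("path")] += 1     # where users hit server errors

print("Most used features:", usage.most_common())
print("Server errors by path:", errors.most_common())
```

Comparing the usage counts against the intended user profile shows whether real behavior matches the assumptions made during requirements analysis.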
Performance Efficiency
Performance efficiency (or simply “performance”) is an essential part of providing a “good experience” for users when they use applications on a variety of fixed and mobile platforms. Performance tests must be considered at all levels of testing.
During acceptance testing, performance tests are particularly addressed during Operational Acceptance Testing (OAT), usually by the operations teams. However, business analysts and testers should also be involved when developing acceptance criteria and related test cases. Acceptance criteria for performance efficiency requirements should provide objective measures, thus avoiding subjective performance evaluation during acceptance test execution.
High-level Performance Acceptance Tests
Performance testing aims to determine a system’s responsiveness and stability under certain conditions. In a typical performance test, concurrent users or transactions are simulated with specific tools to generate a given workload which mimics, as closely as possible, actual conditions with real users and realistic interactions. The response times of key elements of the system under test (e.g., web server, application server, database) are then measured by a tool and compared to pre-defined performance requirements. The same can be done for memory usage, system input/output, CPU busy times, and access to security devices, depending on which component is (expected to be) the bottleneck or is being targeted.
Based upon the analysis of results, specific elements in the architecture (hardware and software) may be modified (such as providing additional server capacity). The cycle of testing, analysis, and improvement may be repeated until the performance target is reached.
Different types of testing can be performed, depending on what needs to be measured. These include load, stress, and endurance/stability tests. Workload can be simulated by using different models: steady state, increasing, scenario-based, or artificial.
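As a minimal sketch of this measure-and-compare cycle, the following simulates a small steady-state workload of concurrent users with Python's standard library and checks the worst observed response time against a pre-defined requirement. The URL, the number of users, and the 2-second limit are assumptions for illustration; real performance tests rely on dedicated load-generation tools.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"   # hypothetical system under test
CONCURRENT_USERS = 20                  # simulated steady-state workload (assumption)
MAX_RESPONSE_SECONDS = 2.0             # pre-defined performance requirement (assumption)

def timed_request(_):
    """One simulated user request; returns the response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    times = list(pool.map(timed_request, range(CONCURRENT_USERS)))

worst = max(times)
verdict = "PASS" if worst <= MAX_RESPONSE_SECONDS else "FAIL"
print(f"worst response: {worst:.3f}s -> {verdict}")
```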
Acceptance Criteria for Performance Acceptance Tests
Performance acceptance criteria can be expressed from different perspectives as shown in the following:
- From a user perspective, the perceived response time reflects the user’s real experience with the system. For example, users may abandon a web site if the response time is more than 10 seconds.
- From a business perspective, the number of concurrent users, the types of scenarios or transactions performed, and the expected response times are factors to be considered. Higher numbers of concurrent users performing resource-intensive transactions will result in longer response times. Other factors, such as the user’s location, the time of day, or the time zone, might also influence response times.
- From a technical perspective, available system resources (e.g., network bandwidth, CPU usage, RAM capacity) and the system architecture (e.g., server load balancing, use of data caching) are factors which influence performance efficiency. For example, web-based systems with limited network bandwidth will tend to have lower performance efficiency, especially when subjected to high loads caused by large numbers of users conducting tasks that generate significant network traffic.
The development of acceptance criteria and acceptance tests for performance requirements must address these three different perspectives (user, business and technical).
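One way to keep such criteria objective is to express them as measurable thresholds that a script can check automatically after a test run. The following is a minimal sketch; the response-time data and the 2.5-second 95th-percentile threshold are illustrative assumptions, while the 10-second limit echoes the abandonment example above.

```python
import statistics

# Hypothetical measured response times (seconds) from a load-test run.
response_times = [0.8, 1.1, 0.9, 1.4, 2.2, 1.0, 1.3, 0.7, 3.1, 1.2]

# Objective acceptance criteria (illustrative thresholds):
# - user perspective: no request may exceed 10 s (abandonment limit)
# - business perspective: 95% of requests complete within 2.5 s
criteria = {"max_response_s": 10.0, "p95_response_s": 2.5}

p95 = statistics.quantiles(response_times, n=20)[18]  # 95th percentile
results = {
    "max_response_s": max(response_times) <= criteria["max_response_s"],
    "p95_response_s": p95 <= criteria["p95_response_s"],
}
print(f"p95 = {p95:.2f}s, results = {results}")
```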
Security
Information security management and general security requirements should be part of an overall security policy for an organization. Business analysts and testers should use the security policy for recommendations and guidelines, and as a basis for managing security risks on their projects.
Security requirements should be considered at all stages of business analysis, requirements engineering, and related acceptance testing, including the following:
- Information security should be part of risk management and non-functional requirements elicitation and analysis. The value of information in the system under test or in a given business process should be assessed, followed by an evaluation and prioritization of security risks.
- Measurable acceptance criteria should be defined for information security requirements. They may cover a large variety of aspects such as authentication, authorization and accounting procedures, sanitization of input data, use of cryptography, and data privacy constraints (an input sanitization check is sketched after this list).
- High-level information security test cases should be defined according to the security requirements and the acceptance criteria. These test cases define the context of the test, the main steps and the expected results.
- Some security acceptance tests can be run by the acceptance tester and others by more specialized security testers, depending on the level of technical complexity of the test.
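As an illustration of a security acceptance test an acceptance tester could run, the following sketch verifies that untrusted input is neutralized before rendering. The sanitize function stands in for the actual sanitization component of the system under test, and the expected behavior is an illustrative assumption.

```python
import html
import unittest

def sanitize(user_input: str) -> str:
    """Hypothetical sanitization step: escape HTML-significant characters."""
    return html.escape(user_input)

class SecurityAcceptanceTests(unittest.TestCase):
    """High-level checks derived from security acceptance criteria (illustrative)."""

    def test_script_injection_is_neutralized(self):
        rendered = sanitize('<script>alert("x")</script>')
        self.assertNotIn("<script>", rendered)

    def test_quotes_are_escaped(self):
        self.assertEqual(sanitize('" onload="evil()'), "&quot; onload=&quot;evil()")

if __name__ == "__main__":
    unittest.main()
```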