diff --git a/docs/Quality Assurance/01-Quality Assurance Phase.md b/docs/Quality Assurance/01-Quality Assurance Phase.md new file mode 100644 index 0000000..7254f88 --- /dev/null +++ b/docs/Quality Assurance/01-Quality Assurance Phase.md @@ -0,0 +1,7 @@ +# Quality Assurance Phase + +Once the development of the software is complete in each sprint, it will be handed over to the Quality Assurance Team along with the technical documentation prepared in the Discovery Phase. + +Based on the provided technical documentation, the QA team will plan and test the system properly. When a bug is found or an improvement is needed, it will be reported to the development team. Once the Development Team delivers a fix, it will be re-tested by the QA team. This cycle continues until the software is stable, to ensure the delivery of high-quality services that match user needs and stakeholders' expectations. + +It is highly recommended to involve the QA Team from the early stages of the Software Development Life Cycle (SDLC). Integrating QA practices at the beginning will result in a well-planned and executed project that successfully delivers high value to the stakeholders. diff --git a/docs/Quality Assurance/02-Guidelines for Quality Assurance.md b/docs/Quality Assurance/02-Guidelines for Quality Assurance.md new file mode 100644 index 0000000..b20dde3 --- /dev/null +++ b/docs/Quality Assurance/02-Guidelines for Quality Assurance.md @@ -0,0 +1,11 @@ +# Guidelines for Quality Assurance + +A faulty application will cost a considerable amount of time, money, and reputation. Effective QA management ensures that software projects maintain quality, efficiency, and consistency, meet the client's requirements, and are finished with minimal flaws and bugs. Without it, software development can be quite unreliable. + +While having a high-quality product is crucial, it is equally important to deliver the product within the specified deadline. 
To achieve this, the following steps are best practices for improving the software testing process and increasing the quality of your software products within the deadline. + +1. [Test Planning](03-Test Planning.md) +2. [Test Design](04-Test Design.md) +3. [Test Execution](06-Test Execution.md) +4. [Defect Management](07-Defect Management.md) + \ No newline at end of file diff --git a/docs/Quality Assurance/03-Test Planning.md b/docs/Quality Assurance/03-Test Planning.md new file mode 100644 index 0000000..c3bfdf1 --- /dev/null +++ b/docs/Quality Assurance/03-Test Planning.md @@ -0,0 +1,19 @@ +# Test Planning + +Having a test plan before starting testing is crucial for the overall success of the QA process and saves time. QA test plans are documents that outline the steps required to perform proper QA testing; this includes the test strategy, objectives, and resources required for testing. It also lists the topics that need testing, as well as the necessary timelines for conducting the tests. A test plan will help identify potential problems early on, which saves time and money in the long run; it is more or less a blueprint of how the testing activity is going to take place in a project. + +**A typical Test Plan Document has the following sections:** + +1. **INTRODUCTION:** A brief summary of the product being tested. Outline all the functions at a high level. +2. **SCOPE:** Describes which features within the project will be tested and which will not. +3. **TESTING STRATEGY:** Test Strategy is an outline or approach to the way testing will be carried out in the software development life cycle. Its purpose is to define the exact process the testing team will follow to achieve the organizational objectives from a testing perspective. +4. 
**Quality Objectives:** The overall objectives that you plan to achieve with your testing, for example: + - Ensure the application under test conforms to functional and non-functional requirements. + - Ensure the AUT meets the quality specifications defined by the client. + - Ensure bugs/issues are identified and fixed before going live. +5. **Test Methodology:** Specify the testing methodology and the reasoning behind choosing it. +6. **Resources and Environment Requirements** + - **Hardware Requirements:** List all the devices that are needed to properly test the system, including the quantity and the specifications. + - **Software Requirements:** List the software and tools required to be installed in addition to the specific software under test. +7. **Risks / Assumptions:** Specify all the risks or assumptions that are identified during the testing process. +8. **Exit Criteria:** Define the criteria that will deem the testing complete. diff --git a/docs/Quality Assurance/04-Test Design.md b/docs/Quality Assurance/04-Test Design.md new file mode 100644 index 0000000..2fdb8e2 --- /dev/null +++ b/docs/Quality Assurance/04-Test Design.md @@ -0,0 +1,110 @@ +# Test Design + +## 1. Boundary Value Analysis (BVA) + +Boundary value analysis is based on testing at the boundaries between partitions. It includes maximum, minimum, inside or outside boundaries, typical values, and error values. + +It is generally the case that a large number of errors occur at the boundaries of the defined input values rather than at the center. BVA therefore gives a selection of test cases that exercise the bounding values. 
This software testing technique is based on the principle that, if a system works well for these particular values, it will also work well for all the values that lie between the two boundary values. + +### Guidelines for Boundary Value Analysis + +- If an input condition is restricted to values between x and y, the test cases should be designed with values x and y as well as values just above and just below x and y. +- If an input condition takes values from a large range, the test cases should exercise the minimum and maximum values. Values just above and just below the minimum and maximum are also tested. +- Apply guidelines 1 and 2 to output conditions. This gives outputs that reflect the minimum and the maximum values expected, and also tests the values just below and just above them. + +**Example:** Input condition is valid between `1 to 10` + +Boundary values `0, 1, 2 and 9, 10, 11` + +## 2. Equivalence Class Partitioning + +Equivalence class partitioning implies splitting test data into classes in which all elements are similar in some way. This technique makes sense only if the components are similar and can fit in a common group. Equivalence class partitioning is a good solution for cases when you deal with a large volume of incoming data or numerous identical input variations. Otherwise, it might make sense to cover a product with tests more closely. + +The concept behind this technique is that a test case using a representative value of a class is equivalent to a test using any other value of the same class. It allows you to identify valid as well as invalid equivalence classes. + +**Example:** + +Input conditions are valid between `1 to 10 and 20 to 30` + +Hence there are five equivalence classes: + +``` +< 1 (invalid) + +1 to 10 (valid) + +11 to 19 (invalid) + +20 to 30 (valid) + +> 30 (invalid) +``` + +You select one value from each class, e.g., `-2, 3, 15, 25, 45` + +## 3. 
Decision Table Based Testing + +A decision table is also known as a Cause-Effect table. This software testing technique is used for functions that respond to a combination of inputs or events. For example, a submit button should be enabled only if the user has entered all required fields. + +The first task is to identify functionalities where the output depends on a combination of inputs. If there are large sets of input combinations, divide them into smaller subsets that are easier to manage in a decision table. + +For every function, you need to create a table and list all the combinations of inputs and their respective outputs. This helps to identify conditions that would otherwise be overlooked by the tester. + +### Steps to create a decision table: + +- List the inputs in rows +- Enter each rule in a column +- Fill the table with the different combinations of inputs +- In the last row, note down the output for each input combination + +**Example:** A submit button in a contact form is enabled only when all the inputs are entered by the end user. + +| | Rule 1 | Rule 2 | Rule 3 | Rule 4 | Rule 5 | Rule 6 | Rule 7 | Rule 8 | |---------|--------|--------|--------|--------|--------|--------|--------|--------| | Input | | | | | | | | | | Name | F | T | F | T | F | T | F | T | | Email | F | F | T | T | F | F | T | T | | Message | F | F | F | F | T | T | T | T | | Output | | | | | | | | | | Submit | F | F | F | F | F | F | F | T | + +## 4. State Transition + +In the State Transition technique, changes in input conditions change the state of the Application Under Test (AUT). This technique allows the tester to test the behavior of the AUT by entering various input conditions in a sequence. The testing team provides positive as well as negative input test values to evaluate the system behavior. 
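In practice, a state-transition model can be written down as a transition table and exercised by a small driver. The sketch below assumes a hypothetical login flow that locks the account after three failed attempts; the state names, event names, and attempt limit are illustrative assumptions, not a prescribed design.

```python
# Hypothetical login flow modelled as a state machine: the account is
# locked after three consecutive invalid attempts (assumed rule).
TRANSITIONS = {
    ("attempt_1", "valid"):   "logged_in",
    ("attempt_1", "invalid"): "attempt_2",
    ("attempt_2", "valid"):   "logged_in",
    ("attempt_2", "invalid"): "attempt_3",
    ("attempt_3", "valid"):   "logged_in",
    ("attempt_3", "invalid"): "locked",
}

def run_sequence(events, start="attempt_1"):
    """Feed a sequence of 'valid'/'invalid' events; return the final state."""
    state = start
    for event in events:
        if state in ("logged_in", "locked"):
            break  # terminal states accept no further input
        state = TRANSITIONS[(state, event)]
    return state

# Positive and negative input sequences, as the technique prescribes:
assert run_sequence(["invalid", "invalid", "valid"]) == "logged_in"
assert run_sequence(["invalid", "invalid", "invalid"]) == "locked"
```

Because every (state, event) pair maps to exactly one next state, a missing transition surfaces immediately as a `KeyError`, which is itself a useful signal while designing state-transition tests.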
+ +### Guidelines for State Transition: + +- State transition should be used when the testing team is testing the application for a limited set of input values. +- The technique should be used when the testing team wants to test a sequence of events that happen in the application under test. + +**Example:** + +In the following example, if the user enters a valid password in any of the first three attempts, the user will be logged in successfully. If the user enters an invalid password on the first or second try, the user will be prompted to re-enter the password. When the user enters the password incorrectly for the third time, action is taken and the account is locked. + +### State transition diagram + +![](../../static/img/documentationGuidelineImgs/state_transition_diagram.png) + +## 5. Error Guessing + +Error guessing is the most experience-based practice of all, usually applied along with another test design technique. In error guessing, a QA specialist predicts where errors are likely to appear, relying on previous experience, knowledge of the system, and product requirements. The QA specialist thus identifies the spots where defects tend to accumulate and pays increased attention to those areas. + +#### Guidelines for Error Guessing: + +- Use the previous experience of testing similar applications +- Build an understanding of the system under test +- Draw on knowledge of typical implementation errors +- Remember previously troubled areas +- Evaluate historical data and test results + +#### Example: + +QA engineers start with testing for common mistakes, such as: + +- Entering blank spaces in text fields. +- Pressing the Submit button without entering data. +- Entering invalid parameters (an email address instead of a phone number, etc.). +- Uploading files that exceed the maximum limit, and so on. + +The more experience a QA specialist has, the more error guessing scenarios they can come up with quickly. 
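As a sketch, the common-mistake checklist above can be turned into data-driven checks against a form validator. Everything here is hypothetical: `validate_message`, the field names, and the upload limit are assumed purely for illustration.

```python
# Error-guessing checks sketched as data-driven tests against a
# hypothetical contact-form validator (validate_message is an assumed
# name, not part of any real framework).

MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # assumed upload limit

def validate_message(name, email, message, upload_size=0):
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    if not name.strip():
        errors.append("name is blank")
    if not email or "@" not in email or email.strip() != email:
        errors.append("email is invalid")
    if not message.strip():
        errors.append("message is blank")
    if upload_size > MAX_UPLOAD_BYTES:
        errors.append("upload exceeds maximum size")
    return errors

# Error-guessing scenarios drawn from the common mistakes listed above:
guessed_cases = [
    ("   ", "a@b.com", "hi", 0),                      # blank spaces in a text field
    ("", "", "", 0),                                  # submit without entering data
    ("Ann", "not-an-email", "hi", 0),                 # invalid parameter format
    ("Ann", "a@b.com", "hi", MAX_UPLOAD_BYTES + 1),   # oversized upload
]

for case in guessed_cases:
    assert validate_message(*case), f"expected a defect for {case}"
```

Each guessed scenario is expected to produce at least one validation error; a scenario that passes silently is either a defect in the validator or a guess that can be retired.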
\ No newline at end of file diff --git a/docs/Quality Assurance/05-Test Case Design.md b/docs/Quality Assurance/05-Test Case Design.md new file mode 100644 index 0000000..8783226 --- /dev/null +++ b/docs/Quality Assurance/05-Test Case Design.md @@ -0,0 +1,32 @@ +# Test Case Design + +A Test Case is a set of actions executed to verify a particular feature or functionality of your software application. A Test Case contains the test steps, test data, preconditions, and postconditions developed for a specific test scenario to verify a requirement. + +The test case includes specific variables or conditions, with which a testing engineer can compare expected and actual results to determine whether the software product functions as per the customer's requirements. + +## Test Scenario vs. Test Case + +Test scenarios are rather vague and cover a wide range of possibilities. Testing, by contrast, is all about being very specific. + +For the Test Scenario "Check Login Functionality", a few of the many possible test cases are: + +- **Test Case 1:** Check the result of entering a valid User ID & Password +- **Test Case 2:** Check the result of entering an invalid User ID & Password +- **Test Case 3:** Check the response when the User ID is empty & the Login Button is pressed, and many more + +## The format of Standard Test Cases + +Below is an example of standard login test cases: + +| Test Case ID | Test Case Description | Test Steps | Test Data | Expected Results | Actual Results | Pass/Fail | |--------------|------------------------------------|--------------------------------------------------------------------------------------------------|------------------------------------------|-------------------------------------------|----------------|-----------------------------------| | T01 | Check User Login with valid Data |
  1. Go to the App
  2. Enter UserID
  3. Enter password
  4. Click submit
| UserID: `kurd`
Password: `Kurd@gov2` | User should be able to log in to the application | As expected | Pass | | T02 | Check User Login with invalid Data |
  1. Go to the App
  2. Enter UserID
  3. Enter password
  4. Click submit
| UserID: `kurd`
Password: `test@gov2` | User should not be able to log in to the application | As expected | Pass | + +The test steps are often not as simple as those above, hence they need documentation. Also, the author of the test case may leave the organization, go on vacation, fall sick, or be busy with other critical tasks, and a recent hire may be asked to execute the test case. Documented steps will help them and will also facilitate reviews by other stakeholders. + +During test execution, the tester will check expected results against actual results and assign a pass or fail status. + +Apart from these, your test case may have a field like Pre-Condition, which specifies the things that must be in place before the test can run. For our test case, a pre-condition would be to have the application installed in order to access the Application Under Test. A test case may also include Post-Conditions, which specify anything that applies after the test case completes. For our test case, a post-condition would be that the time and date of login are stored in the database. + +By the end of the test case execution phase, all the executed test cases should be recorded in a template like [this](https://govkrd.b-cdn.net/Digital%20Service%20Manual/Test%20Case%20Design%20Template.docx). \ No newline at end of file diff --git a/docs/Quality Assurance/06-Test Execution.md b/docs/Quality Assurance/06-Test Execution.md new file mode 100644 index 0000000..dfec2ed --- /dev/null +++ b/docs/Quality Assurance/06-Test Execution.md @@ -0,0 +1,43 @@ +# Test Execution + +Test execution is the process of executing the test cases and comparing the expected and actual results to ensure the fulfillment of the pre-defined requirements and specifications of the developed software product. It is also responsible for deciding the readiness of the software product: if the results of this execution match the expected or desired results, the software product is considered ready to go to production. 
Otherwise, it may have to go through the SDLC and STLC again. + +## Software Testing Types + +Software testing is generally classified into two broad categories: **functional testing** and **non-functional testing**. + +### 1. Functional Testing + +Functional testing involves testing the functional aspects of a software application. When performing functional tests, you have to exercise each and every functionality and check whether you are getting the desired results. + +There are several types of functional testing, such as: + +- Unit testing +- Integration testing +- User Acceptance testing +- Smoke testing +- Sanity testing +- Regression testing +- Acceptance testing +- White box testing +- Black box testing +- Interface testing + +Functional tests are performed both manually and using automation tools. + +### 2. Non-functional Testing + +Non-functional testing is the testing of non-functional aspects of an application, such as performance, reliability, usability, and security. Non-functional tests are performed after the functional tests. + +There are several types of non-functional testing, such as: + +- Performance testing +- Security testing +- Load testing +- Compatibility testing +- Usability testing +- Scalability testing +- Volume testing +- Stress testing +- Efficiency testing +- Reliability testing diff --git a/docs/Quality Assurance/07-Defect Management.md b/docs/Quality Assurance/07-Defect Management.md new file mode 100644 index 0000000..4ec6e67 --- /dev/null +++ b/docs/Quality Assurance/07-Defect Management.md @@ -0,0 +1,11 @@ +# Defect Management + +Defect Management is a method for identifying and resolving defects. The steps of a defect management cycle are as follows: + +1. Detection of a Defect +2. Categorization of Defects +3. Defect Fixing by Developers +4. QA team verification +5. 
Defect Closure + +All the defects that are found in the testing phase should be recorded in a template like [this](https://govkrd.b-cdn.net/Digital%20Service%20Manual/Defect%20Management%20Template%20Template.docx). diff --git a/static/img/documentationGuidelineImgs/state_transition_diagram.png b/static/img/documentationGuidelineImgs/state_transition_diagram.png new file mode 100644 index 0000000..8090ae1 Binary files /dev/null and b/static/img/documentationGuidelineImgs/state_transition_diagram.png differ