# Field definitions

## Status

In software testing, the Status refers to the current state or condition of a test case, test execution, defect, or any other testing-related item, indicating its progress, outcome, or position in the testing lifecycle.
Different types of possible Status:

Status | Explanation |
---|---|
In Progress | In software testing, the "In Progress" status indicates that a test case, test execution, or testing activity is currently ongoing and has not yet been completed. |
Not Started | This status indicates that the test case has not been initiated or started yet. |
Started | This status means that the test case has been initiated and work has begun on executing the test steps or conducting the necessary activities. |
Overdue | This status indicates that the test case has exceeded the expected completion date or deadline. It implies that the test case should have been completed by now, but it is still pending. |
Complete-Not Tested | This status suggests that the test case has been marked as complete without being tested or executed. It might happen if the test case is determined to be unnecessary or redundant. |
Complete-Testing In Progress | This status indicates that the test case execution or testing is in progress, but it has not been concluded yet. It signifies that the work related to testing is ongoing. |
Complete-Testing Failed | This status implies that the test case execution or testing has been completed, but it has failed to meet the expected results or criteria. It indicates that the test case did not pass the intended test. |
Complete-Verified | This status suggests that the test case execution or testing has been completed successfully and the expected results have been achieved. It indicates that the test case has passed the intended test and the results have been verified. |
Rejected | This status suggests that the test case has been rejected or invalidated due to various reasons. It implies that the test case is not considered valid or suitable for the testing process. |
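For illustration, the status values above could be represented as an enumeration when scripting against a test management tool's API. This is a minimal sketch; the class and member names are assumptions, not an actual ZeuZ interface.

```python
from enum import Enum

class TestCaseStatus(Enum):
    """Illustrative enumeration of the status values listed above."""
    NOT_STARTED = "Not Started"
    STARTED = "Started"
    IN_PROGRESS = "In Progress"
    OVERDUE = "Overdue"
    COMPLETE_NOT_TESTED = "Complete-Not Tested"
    COMPLETE_TESTING_IN_PROGRESS = "Complete-Testing In Progress"
    COMPLETE_TESTING_FAILED = "Complete-Testing Failed"
    COMPLETE_VERIFIED = "Complete-Verified"
    REJECTED = "Rejected"

# Example: parse a status string from a hypothetical reporting export.
status = TestCaseStatus("Complete-Verified")
print(status.name)  # COMPLETE_VERIFIED
```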
## Milestone

A Milestone in software testing represents a significant event or achievement that marks the completion of a specific phase or objective in the testing process.
Different types of possible Milestones:

Milestone | Description |
---|---|
QA Milestone | A QA milestone in software testing signifies a significant point of completion or achievement in quality assurance activities to ensure the desired level of software quality. |
Training Task Milestone | A training task milestone in software testing represents a significant point in the training process where specific training activities or objectives related to testing are completed. |
Practice Milestone | A practice milestone in software testing signifies a significant point in the testing process where testing techniques, methodologies, or skills are practiced and applied successfully. |
Test Milestone | A test milestone in software testing refers to a significant point in the testing process that marks the completion of a specific set of tests or the achievement of a testing objective. |
Backlog Milestone | A backlog milestone in software testing represents a significant point in the testing process where a specific backlog of testing tasks or defects is addressed and resolved. |
## Owner

In software testing, the Owner refers to the individual or team responsible for overseeing and managing the testing activities, ensuring quality, and coordinating with stakeholders.
## Description

In software testing, the Description refers to a concise and clear explanation of the purpose, scope, and details of a test case or test scenario.
## Folder

In software testing, a Folder refers to a container or directory used to organize and categorize test cases, test scripts, or other testing artifacts for efficient management and navigation.
## Feature

In software testing, a Feature refers to a distinct functionality or capability of a software system that is tested individually to ensure its proper functioning.
## Label

In software testing, a Label refers to a descriptive tag or identifier assigned to a test case, test suite, or defect, providing a way to categorize and organize them based on specific criteria or attributes.
## Priority

In software testing, Priority refers to the relative importance or urgency assigned to a defect, test case, or requirement, indicating the order in which it should be addressed or executed based on its significance.
Different types of possible Priorities:

Priority | Description |
---|---|
P1 | P1 priority in software testing denotes the highest level of urgency and criticality assigned to a defect, test case, or requirement that requires immediate attention and resolution. |
P2 | P2 priority typically refers to a high level of urgency and importance assigned to a defect, test case, or requirement, indicating that it should be addressed promptly but with a slightly lower level of urgency compared to P1 priority items. |
P3 | P3 priority typically refers to a moderate level of urgency and importance assigned to a defect, test case, or requirement, indicating that it should be addressed in a timely manner, but with a lower level of urgency compared to P1 and P2 priority items. |
P4 | P4 priority typically refers to a low level of urgency and importance assigned to a defect, test case, or requirement, indicating that it can be addressed with a lower priority compared to P1, P2, and P3 priority items. |
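As a sketch of how these levels drive triage order, the snippet below sorts a hypothetical defect list so P1 items surface first. The defect records are invented for illustration.

```python
# Map each priority label to a sortable rank (lower = more urgent).
PRIORITY_RANK = {"P1": 1, "P2": 2, "P3": 3, "P4": 4}

defects = [
    {"id": "DEF-101", "priority": "P3"},
    {"id": "DEF-102", "priority": "P1"},
    {"id": "DEF-103", "priority": "P4"},
    {"id": "DEF-104", "priority": "P2"},
]

# Address the most urgent defects first.
for defect in sorted(defects, key=lambda d: PRIORITY_RANK[d["priority"]]):
    print(defect["id"], defect["priority"])
# DEF-102 P1, then DEF-104 P2, DEF-101 P3, DEF-103 P4
```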
## Start Date

In software testing, the Start Date refers to the planned or actual date on which a particular testing activity, phase, or test execution period is scheduled to begin.
## End Date

In software testing, the End Date refers to the planned or actual date on which a particular testing activity, phase, or test execution period is expected to be completed or has been completed.
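Because an End Date that falls before the Start Date signals a data-entry error, tooling that consumes these two fields might validate the pair. The function below is a hypothetical sketch.

```python
from datetime import date

def validate_schedule(start_date: date, end_date: date) -> None:
    """Reject testing phases whose End Date precedes the Start Date."""
    if end_date < start_date:
        raise ValueError(
            f"End Date {end_date} cannot precede Start Date {start_date}"
        )

validate_schedule(date(2024, 3, 1), date(2024, 3, 15))   # OK
# validate_schedule(date(2024, 3, 15), date(2024, 3, 1))  # raises ValueError
```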
## Automatability

Automatability in software testing refers to the extent to which a test case or a specific testing activity can be automated using test automation tools or frameworks.
Different types of Automatability:

Automatability | Description |
---|---|
Automation | Automation in software testing refers to the use of tools, scripts, or frameworks to execute tests, perform actions, and verify expected outcomes without manual intervention. |
Easy to Automate | Easy to automate in software testing refers to test cases or scenarios that have clear and well-defined steps, predictable outcomes, and can be efficiently executed using automation tools or frameworks. |
Hard to Automate | Hard to automate in software testing refers to test cases or scenarios that involve complex user interactions, non-deterministic behaviour, or require human judgement, making it challenging to replicate and automate accurately. |
Not Automatable | Not automatable in software testing refers to test cases or scenarios that cannot be effectively or efficiently automated due to their inherent complexity, reliance on human judgement, or lack of appropriate automation tools or frameworks. |
Performance | Performance automatability in software testing refers to the degree to which performance testing activities and performance-related metrics can be automated using specialized tools or frameworks. |
Undefined | Undefined automatability in software testing refers to the lack of clear guidelines or criteria to determine whether a particular test case or testing activity can be effectively automated or not. |
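One common use of this field is assembling an automation backlog from the cheapest candidates first. The sketch below filters hypothetical test cases by their Automatability value.

```python
test_cases = [
    {"id": "TC-1", "automatability": "Easy to Automate"},
    {"id": "TC-2", "automatability": "Hard to Automate"},
    {"id": "TC-3", "automatability": "Not Automatable"},
    {"id": "TC-4", "automatability": "Easy to Automate"},
]

# Start the automation backlog with the low-effort candidates.
backlog = [tc["id"] for tc in test_cases
           if tc["automatability"] == "Easy to Automate"]
print(backlog)  # ['TC-1', 'TC-4']
```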
## Testing Required

Testing required in software testing refers to the determination that a particular test or set of tests needs to be executed to validate the functionality, performance, or quality of the software being tested.
## Testing Ignored

Testing ignored in software testing refers to a decision or action taken to intentionally skip or neglect the execution of a particular test or set of tests, potentially due to factors such as time constraints, low priority, or perceived insignificance.
## Private

Private refers to a visibility setting that restricts access to certain test cases, test suites, or testing artifacts, making them only accessible to authorized individuals or teams rather than the entire testing community.
## Make Private

Make Private refers to the action of changing the visibility settings of test cases, test suites, or testing artifacts to restrict access, making them accessible only to authorized individuals or teams rather than the entire testing community.
## Rejected

Rejected refers to the status of a test case, requirement, or defect that has been reviewed and determined as not meeting the specified criteria or not being valid, and thus, it is not considered for further action or implementation.
## Not Rejected

Not Rejected refers to the status of a test case, requirement, or defect that has not been deemed invalid or non-compliant during the review process, indicating that it is considered valid and may require further action or implementation.
## Version

In software testing, a Version refers to a distinct iteration or release of a software system or component, typically identified by a specific number or designation, representing a snapshot of the software at a particular point in time.
Different types of possible Versions:

Version | Description |
---|---|
Found Version | Found Version refers to the specific version or iteration of the software in which a particular defect or issue was identified or discovered. |
Fixed Version | Fixed Version refers to the specific version or iteration of the software in which a reported defect or issue has been resolved or fixed by the development team. |
Subversion | Subversion refers to a version control system that tracks changes to source code, allowing teams to manage and collaborate on software development projects effectively. |
Verified Version | Verified Version refers to the specific version or iteration of the software that has undergone testing and validation, confirming that it meets the specified requirements and quality standards. |
Branch | Branch refers to a parallel version of the source code or a separate development line that allows independent development and testing of new features or bug fixes without affecting the main codebase. |
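A single defect record often carries several of these version fields at once. The dataclass below is an illustrative sketch; the field names are assumptions, not ZeuZ's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Defect:
    """Hypothetical defect record carrying version fields."""
    defect_id: str
    found_version: str                      # version in which the issue was discovered
    fixed_version: Optional[str] = None     # version containing the fix, once resolved
    verified_version: Optional[str] = None  # version in which the fix was retested

bug = Defect(defect_id="DEF-210", found_version="2.4.0")
bug.fixed_version = "2.4.1"     # development resolves the issue
bug.verified_version = "2.4.1"  # QA confirms the fix in that release
```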
## Ticket ID

The Ticket ID is a unique identifier assigned to a particular test case to track and manage its execution and results.
## Test Case ID

Test Case ID is a unique identifier assigned to a specific test case for easy reference, tracking, and documentation purposes.
## Requirement ID

Requirement ID refers to a unique identifier assigned to a specific requirement, linking test cases to corresponding software functionalities to ensure comprehensive test coverage.
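Mapping Requirement IDs to Test Case IDs is the basis of a traceability matrix: any requirement with no linked test cases is a coverage gap. A minimal sketch with invented IDs:

```python
# Hypothetical traceability matrix: Requirement ID -> linked Test Case IDs.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no tests written yet
}

# Flag requirements that lack test coverage.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # ['REQ-003']
```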
## Complexity

Complexity in test cases refers to the degree of intricacy and difficulty involved in executing a particular test scenario, often associated with the effort required for testing. It is rated on a scale of 1 to 10, where 1 indicates the simplest test cases and 10 indicates the most complex.
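A tool consuming this field would typically enforce the 1 to 10 scale before accepting a value; the check below is a hypothetical sketch.

```python
def set_complexity(test_case: dict, complexity: int) -> None:
    """Assign a complexity rating on the 1 (simplest) to 10 (most complex) scale."""
    if not 1 <= complexity <= 10:
        raise ValueError(f"Complexity must be between 1 and 10, got {complexity}")
    test_case["complexity"] = complexity

tc = {"id": "TC-301"}
set_complexity(tc, 7)    # OK
# set_complexity(tc, 11)  # raises ValueError
```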
## Scheduler

In software testing, a Scheduler is a tool or component that manages and automates the execution of test cases and test suites at specified times or intervals.
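As a rough illustration of the concept (not ZeuZ's scheduler), Python's standard `sched` module can re-run a test suite at a fixed interval:

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def run_test_suite():
    print("Running scheduled test suite at", time.ctime())
    # ... invoke the test runner here ...
    # Re-schedule the next run one hour from now.
    scheduler.enter(3600, 1, run_test_suite)

# Queue the first run one hour from now, then block and dispatch events.
scheduler.enter(3600, 1, run_test_suite)
scheduler.run()
```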
## Webhook for CI/CD

In the context of CI/CD in software testing, a Webhook is an automated notification mechanism that allows one software application to send real-time data to another application when specific events or actions occur, enabling seamless integration and automation of the testing and deployment process.
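For example, a CI server could notify a test runner that a build has finished through an HTTP webhook. The Flask endpoint below is a hypothetical sketch; the URL path and payload fields are assumptions, not a documented ZeuZ contract.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook/build-complete", methods=["POST"])
def on_build_complete():
    """Called by the CI server when a build finishes."""
    payload = request.get_json()
    # Hypothetical payload fields sent by the CI server.
    build_id = payload.get("build_id")
    branch = payload.get("branch")
    print(f"Build {build_id} on {branch} finished; starting regression run")
    # ... enqueue the appropriate test suite here ...
    return {"status": "accepted"}, 202

if __name__ == "__main__":
    app.run(port=8080)
```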
## Run/CI Presets

In software testing, Run/CI Presets refer to predefined configurations or settings that streamline the process of running tests or continuous integration tasks with specific parameters, making it easier to manage and execute testing routines.
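A preset is essentially a named bundle of run parameters. The sketch below illustrates the idea with invented keys; it is not ZeuZ's actual preset format.

```python
# Hypothetical presets: named bundles of run parameters.
PRESETS = {
    "smoke": {"suite": "smoke_tests", "browser": "chrome", "parallel": 2},
    "nightly-regression": {"suite": "full_regression", "browser": "firefox", "parallel": 8},
}

def run_with_preset(name: str) -> None:
    """Launch a test run using a stored preset instead of ad-hoc parameters."""
    config = PRESETS[name]
    print(f"Running {config['suite']} on {config['browser']} "
          f"with {config['parallel']} parallel workers")

run_with_preset("smoke")
```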
## Work Schedule

A Work Schedule typically refers to a planned timeline or calendar that outlines when specific testing tasks, activities, or milestones should be completed during a testing project.
## Test Plan

A Test Plan in ZeuZ is a collection of presets that can be deployed together quickly.