Field definitions

Status#

In software testing, the Status refers to the current state or condition of a test case, test execution, defect, or any other testing-related item, indicating its progress, outcome, or position in the testing lifecycle.

Different types of possible Status#

In Progress: In software testing, the "In Progress" status indicates that a test case, test execution, or testing activity is currently ongoing and has not yet been completed.
Not Started: This status indicates that the test case has not been initiated or started yet.
Started: This status means that the test case has been initiated and work has begun on executing the test steps or conducting the necessary activities.
Overdue: This status indicates that the test case has exceeded the expected completion date or deadline. It implies that the test case should have been completed by now, but it is still pending.
Complete-Not Tested: This status suggests that the test case has been marked as complete without being tested or executed. It might happen if the test case is determined to be unnecessary or redundant.
Complete-Testing In Progress: This status indicates that the test case execution or testing is in progress, but it has not been concluded yet. It signifies that the work related to testing is ongoing.
Complete-Testing Failed: This status implies that the test case execution or testing has been completed, but it has failed to meet the expected results or criteria. It indicates that the test case did not pass the intended test.
Complete-Verified: This status suggests that the test case execution or testing has been completed successfully and the expected results have been achieved. It indicates that the test case has passed the intended test and the results have been verified.
Rejected: This status suggests that the test case has been rejected or invalidated due to various reasons. It implies that the test case is not considered valid or suitable for the testing process.
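As a sketch, the status values above can be modeled as an enumeration. The class and helper below are illustrative only, not ZeuZ's actual API; the notion of a "terminal" status is an assumption for the example.

```python
from enum import Enum

class Status(Enum):
    """Hypothetical model of the test-item statuses described above."""
    IN_PROGRESS = "In Progress"
    NOT_STARTED = "Not Started"
    STARTED = "Started"
    OVERDUE = "Overdue"
    COMPLETE_NOT_TESTED = "Complete-Not Tested"
    COMPLETE_TESTING_IN_PROGRESS = "Complete-Testing In Progress"
    COMPLETE_TESTING_FAILED = "Complete-Testing Failed"
    COMPLETE_VERIFIED = "Complete-Verified"
    REJECTED = "Rejected"

def is_terminal(status: Status) -> bool:
    """A status is terminal if no further testing work is expected (assumption)."""
    return status in {
        Status.COMPLETE_NOT_TESTED,
        Status.COMPLETE_TESTING_FAILED,
        Status.COMPLETE_VERIFIED,
        Status.REJECTED,
    }

print(is_terminal(Status.COMPLETE_VERIFIED))  # True
print(is_terminal(Status.IN_PROGRESS))        # False
```

Using an enum rather than free-form strings catches typos early: `Status("Complete-Verified")` succeeds, while an unknown string raises `ValueError`.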

Milestone#

A Milestone in software testing represents a significant event or achievement that marks the completion of a specific phase or objective in the testing process.

Different types of possible Milestone#

QA Milestone: A QA milestone in software testing signifies a significant point of completion or achievement in quality assurance activities to ensure the desired level of software quality.
Training Task Milestone: A training task milestone in software testing represents a significant point in the training process where specific training activities or objectives related to testing are completed.
Practice Milestone: A practice milestone in software testing signifies a significant point in the testing process where testing techniques, methodologies, or skills are practiced and applied successfully.
Test Milestone: A test milestone in software testing refers to a significant point in the testing process that marks the completion of a specific set of tests or the achievement of a testing objective.
Backlog Milestone: A backlog milestone in software testing represents a significant point in the testing process where a specific backlog of testing tasks or defects is addressed and resolved.

Owner#

In software testing, the Owner refers to the individual or team responsible for overseeing and managing the testing activities, ensuring quality, and coordinating with stakeholders.

Description#

In software testing, the Description refers to a concise and clear explanation of the purpose, scope, and details of a test case or test scenario.

Folder#

In software testing, a Folder refers to a container or directory used to organize and categorize test cases, test scripts or other testing artifacts for efficient management and navigation.

Feature#

In software testing, a Feature refers to a distinct functionality or capability of a software system that is tested individually to ensure its proper functioning.

Label#

In software testing, a Label refers to a descriptive tag or identifier assigned to a test case, test suit, or defect, providing a way to categorize and organize them based on specific criteria or attributes.

Priority#

In software testing, Priority refers to the relative importance or urgency assigned to a defect, test case, or requirement, indicating the order in which it should be addressed or executed based on its significance.

Different types of possible Priorities#

P1: P1 priority in software testing denotes the highest level of urgency and criticality assigned to a defect, test case, or requirement that requires immediate attention and resolution.
P2: P2 priority typically refers to a high level of urgency and importance assigned to a defect, test case, or requirement, indicating that it should be addressed promptly but with a slightly lower level of urgency compared to P1 priority items.
P3: P3 priority typically refers to a moderate level of urgency and importance assigned to a defect, test case, or requirement, indicating that it should be addressed in a timely manner, but with a lower level of urgency compared to P1 and P2 priority items.
P4: P4 priority typically refers to a low level of urgency and importance assigned to a defect, test case, or requirement, indicating that it can be addressed with a lower priority compared to P1, P2, and P3 priority items.
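Because priorities are ordered, they lend themselves to an integer enumeration, so a defect queue can simply be sorted to put the most urgent items first. The names and sample defects below are illustrative, not part of ZeuZ.

```python
from enum import IntEnum

class Priority(IntEnum):
    """Hypothetical priority levels; a lower number means more urgent."""
    P1 = 1
    P2 = 2
    P3 = 3
    P4 = 4

# Sample defects paired with their assigned priority (made-up data).
defects = [
    ("login crash", Priority.P1),
    ("typo in footer", Priority.P4),
    ("slow report page", Priority.P3),
]

# Sorting by priority yields the order in which defects should be addressed.
queue = sorted(defects, key=lambda d: d[1])
print([name for name, _ in queue])  # ['login crash', 'slow report page', 'typo in footer']
```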

Start Date#

In software testing, the Start Date refers to the planned or actual date on which a particular testing activity, phase, or test execution period is scheduled to begin.

End date#

In software testing, the End Date refers to the planned or actual date on which a particular testing activity, phase, or test execution period is expected to be completed or has been completed.
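Taken together, the Start Date and End Date bound a testing window, so the planned duration falls out of simple date arithmetic. The dates below are made up for illustration.

```python
from datetime import date

start = date(2024, 3, 1)   # planned Start Date (example value)
end = date(2024, 3, 15)    # planned End Date (example value)

# Subtracting two dates gives a timedelta; .days is the window length.
duration = (end - start).days
print(duration)  # 14
```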

Automatability#

Automatability in software testing refers to the extent to which a test case or a specific testing activity can be automated using test automation tools or frameworks.

Different types of Automatability#

Automation: Automation in software testing refers to the use of tools, scripts, or frameworks to execute tests, perform actions, and verify expected outcomes without manual intervention.
Easy to Automate: Easy to automate in software testing refers to test cases or scenarios that have clear and well-defined steps, predictable outcomes, and can be efficiently executed using automation tools or frameworks.
Hard to Automate: Hard to automate in software testing refers to test cases or scenarios that involve complex user interactions, non-deterministic behaviour, or require human judgement, making it challenging to replicate and automate accurately.
Not Automatable: Not automatable in software testing refers to test cases or scenarios that cannot be effectively or efficiently automated due to their inherent complexity, reliance on human judgement, or lack of appropriate automation tools or frameworks.
Performance: Performance automatability in software testing refers to the degree to which performance testing activities and performance-related metrics can be automated using specialized tools or frameworks.
Undefined: Undefined automatability in software testing refers to the lack of clear guidelines or criteria to determine whether a particular test case or testing activity can be effectively automated or not.

Testing Required#

Testing required in software testing refers to the determination that a particular test or set of tests needs to be executed to validate the functionality, performance, or quality of the software being tested.

Testing Ignored#

Testing ignored in software testing refers to a decision or action taken to intentionally skip or neglect the execution of a particular test or set of tests, potentially due to factors such as time constraints, low priority, or perceived insignificance.

Private#

Private refers to a visibility setting that restricts access to certain test cases, test suites, or testing artifacts, making them only accessible to authorized individuals or teams rather than the entire testing community.

Make Private#

Make Private refers to the action of changing the visibility settings of test cases, test suites, or testing artifacts to restrict access, making them accessible only to authorized individuals or teams rather than the entire testing community.

Rejected#

Rejected refers to the status of a test case, requirement, or defect that has been reviewed and determined as not meeting the specified criteria or not being valid, and thus, it is not considered for further action or implementation.

Not Rejected#

Not Rejected refers to the status of a test case, requirement, or defect that has not been deemed invalid or non-compliant during the review process, indicating that it is considered valid and may require further action or implementation.

Version#

In software testing, a Version refers to a distinct iteration or release of a software system or component, typically identified by a specific number or designation, representing a snapshot of the software at a particular point in time.

Different types of possible Versions#

Found Version: Found Version refers to the specific version or iteration of the software in which a particular defect or issue was identified or discovered.
Fixed Version: Fixed Version refers to the specific version or iteration of the software in which a reported defect or issue has been resolved or fixed by the development team.
Subversion: Subversion refers to a version control system that tracks changes to source code, allowing teams to manage and collaborate on software development projects effectively.
Verified Version: Verified Version refers to the specific version or iteration of the software that has undergone testing and validation, confirming that it meets the specified requirements and quality standards.
Branch: Branch refers to a parallel version of the source code or a separate development line that allows independent development and testing of new features or bug fixes without affecting the main codebase.

Ticket ID#

The Ticket ID is a unique identifier assigned to a particular test case to track and manage its execution and results.

Test Case ID#

Test Case ID is a unique identifier assigned to a specific test case for easy reference, tracking, and documentation purposes.

Requirement ID#

Requirement ID refers to a unique identifier assigned to a specific requirement, linking test cases to corresponding software functionalities to ensure comprehensive test coverage.
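The three identifiers above link together into a traceability record: each test case carries its own ID plus the requirement and ticket it exercises, and grouping by requirement shows coverage. The record type and sample IDs below are illustrative, not ZeuZ's data model.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Hypothetical record linking a test case to its requirement and ticket."""
    test_case_id: str
    requirement_id: str
    ticket_id: str

# Made-up identifiers for illustration.
cases = [
    TestCase("TC-101", "REQ-7", "TKT-553"),
    TestCase("TC-102", "REQ-7", "TKT-554"),
    TestCase("TC-103", "REQ-8", "TKT-600"),
]

# Group test case IDs by requirement to inspect coverage.
coverage = {}
for c in cases:
    coverage.setdefault(c.requirement_id, []).append(c.test_case_id)
print(coverage)  # {'REQ-7': ['TC-101', 'TC-102'], 'REQ-8': ['TC-103']}
```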

Complexity#

Complexity in test cases refers to the degree of intricacy and difficulty involved in executing a particular test scenario, often associated with the effort required for testing.

Complexity is rated on a scale from 1 to 10, where 1 indicates the simplest test cases and 10 the most complex.
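A small validation helper makes the 1-to-10 scale concrete; this is a sketch, not a ZeuZ function.

```python
def validate_complexity(value: int) -> int:
    """Accept a complexity rating from 1 (simplest) to 10 (most complex)."""
    if not 1 <= value <= 10:
        raise ValueError(f"complexity must be between 1 and 10, got {value}")
    return value

print(validate_complexity(7))  # 7
```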

Scheduler#

In software testing, a Scheduler is a tool or component that manages and automates the execution of test cases and test suites at specified times or intervals.
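The idea can be sketched with Python's standard-library `sched` module: test suites are registered to run after given delays, and the scheduler fires them in time order. The suite names and delays are made up; a real scheduler would launch actual test runs instead of appending to a list.

```python
import sched
import time

results = []

def run_suite(name):
    """Stand-in for launching a real test suite."""
    results.append(name)

scheduler = sched.scheduler(time.monotonic, time.sleep)
# Register two suites a fraction of a second apart (delay, priority, action, args).
scheduler.enter(0.1, 1, run_suite, argument=("smoke",))
scheduler.enter(0.2, 1, run_suite, argument=("regression",))
scheduler.run()  # blocks until all scheduled events have fired
print(results)   # ['smoke', 'regression']
```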

Webhook for CI/CD#

In the context of CI/CD in software testing, a Webhook is an automated notification mechanism that allows one software application to send real-time data to another application when specific events or actions occur, enabling seamless integration and automation of the testing and deployment process.
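For instance, a CI server might POST a small JSON payload to a registered webhook URL when a build finishes. The event and field names below are assumptions for illustration, not a documented ZeuZ payload format.

```python
import json

def build_webhook_payload(event: str, build_id: int, status: str) -> str:
    """Serialize a hypothetical CI webhook notification as JSON."""
    return json.dumps({"event": event, "build_id": build_id, "status": status})

# The receiving application would parse this body and react, e.g. by
# triggering a test run or updating a dashboard.
payload = build_webhook_payload("build.finished", 42, "passed")
print(payload)
```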

Run/CI Presets#

In software testing, Run/CI Presets refer to predefined configurations or settings that streamline the process of running tests or continuous integration tasks with specific parameters, making it easier to manage and execute testing routines.

Work Schedule#

A Work Schedule typically refers to a planned timeline or calendar that outlines when specific testing tasks, activities, or milestones should be completed during a testing project.

Test Plan#

A Test Plan in ZeuZ is a collection of presets that can be deployed together quickly, all at once.