A Guide on How to Write Test Cases
How do you write Test Cases? Here’s a Helpful Guide on Tools to Use, Formats, and Best Practices
What is a Test Case?
According to Guru99, a Test Case is "a set of actions executed to verify a particular feature or functionality of your software application." Unlike Test Scenarios, which are broader and describe what to test at a high level, Test Cases are narrower and spell out the specific conditions to be tested.
Test Case Creation Tools
Here are some helpful tools that I've personally tried using when creating test cases:
Microsoft Excel
Google Sheets
Tricentis Test Management
Test Case Format
Test Cases have different formats depending on the standards of the organization. The following is a format that I've been using for years:
Sheet 0: File Title
- Component_TestType_YYYYMMDD-<version-number>
Sheet 1: Title Page
Application/Component to be tested
Acronyms and meanings (if applicable)
Document version
Sheet 2: Document History
Date
Document Version
Changes Made
Author
Sheet 3: Configuration Sheet
Test Environment - include whatever is applicable to your test
OS:
OS version:
Browser:
Kernel version: (usually for Linux-based OS)
Application Version
Platform version
BIOS version
Hardware Version
Firmware Version
Sheet 4: Test Case Sheet
Sheet Name - Base the sheet name on the component or feature being tested
Tester - who executed the tests
Author - who wrote the test cases
Reviewer - who reviewed the test document
Test Cases
Test Case No. - based on the page/component to be tested, e.g. XXX-0001
Test Scenario - Test Scenario covering the test case to be done, e.g. Sign Up
Test Case - The specific test to be done, e.g. Sign up using Google SSO
Description - A description of the Test Case (if applicable)
Pre-conditions - conditions that need to be satisfied before the test is executed
Test Steps - steps to be followed to ensure that the test case is properly executed
Post-conditions - expected conditions of the application after the test steps are executed
Expected Results - results that are expected following the execution of the test case. This becomes the basis for the PASS/FAIL result
Test Results - PASS/FAIL/SKIPPED or OK/NG/SKIPPED. The exact terms depend on the organization's conventions, but PASS/FAIL/SKIPPED is the most common.
Test Run Date - the date when the test case was executed. All the tests in the document are expected to be executed on the same day so that the test conditions do not change between runs. There are exceptions, however, such as "Long Run Tests" and "Sleep Tests", which require a day or more to execute.
Remarks - any notes or links for references
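If you keep test cases in code or export them from a spreadsheet, the fields above map naturally onto a simple record type. A minimal sketch in Python (the `TestCase` class and its field names are illustrative, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of the Test Case Sheet (fields follow this guide)."""
    number: str                  # Test Case No., e.g. "XXX-0001"
    scenario: str                # Test Scenario, e.g. "Sign Up"
    case: str                    # Test Case, e.g. "Sign up using Google SSO"
    description: str = ""
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    postconditions: list = field(default_factory=list)
    expected_results: str = ""
    result: str = "SKIPPED"      # PASS / FAIL / SKIPPED
    run_date: str = ""           # date the test case was executed
    remarks: str = ""            # notes or reference links

# Example entry using the sample values from this guide
tc = TestCase(
    number="XXX-0001",
    scenario="Sign Up",
    case="Sign up using Google SSO",
)
```

A structure like this also makes it easy to generate the Test Case Sheet (or an RTM) automatically instead of maintaining it by hand.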
Best Practices
Simplicity - Test Cases should be simple, not overly complicated. They must be as clear and concise as possible so that even a tester who is not the author of the document can run the tests without difficulty.
Identifiability - Test Cases are identified by their Test Case Number. This number should be unique to each test case so there is no ambiguity about which test case covers which component.
Understandability - Test Cases should be easy to understand and self-explanatory
Traceability - the test cases must be mappable to the requirements documents (Software Development Requirements, Basic Requirements, Detailed Requirements, etc). Functionalities and tests should not be assumed when writing the test cases.
Consider Use Case Scenarios - As Use Case Scenarios were written with the roles of specific users in mind, it's helpful to refer to Use Cases when creating test cases.
Ensure 100% Coverage - test cases must cover all of the requirements. This could be checked by referring to an RTM (Requirements Traceability Matrix)
Do Not Repeat Yourself (DRY) - Do not repeat test cases. If a test case is a prerequisite for a subsequent one, reference it by its Test Case Number instead of restating its steps.
Updatability - Test Cases should be easy to update as the requirements or scope of the application change
Make use of existing test techniques and principles
Boundary Value Analysis (BVA) - testing values that are on the boundaries of the accepted inputs to avoid exhaustive testing
Equivalence Partitioning - dividing input values into partitions whose members are expected to behave the same, then testing one representative value per partition to avoid exhaustive testing
State Transition Technique - testing how the application behaves when it switches between states in response to certain actions
Pareto Principle - the 80-20 principle: roughly 80% of bugs are found in 20% of the components. Simply put, most bugs tend to cluster in the same few components
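To make the first two techniques concrete, suppose a form field accepts ages 18 to 65 inclusive (the range and the `is_valid_age` validator are hypothetical). Equivalence Partitioning yields three partitions (below, inside, and above the range), each tested with one representative value, while Boundary Value Analysis tests the values on and just beyond each edge:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical validator: accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

# Equivalence Partitioning: one representative value per partition
assert is_valid_age(10) is False   # partition below the valid range
assert is_valid_age(40) is True    # partition inside the valid range
assert is_valid_age(70) is False   # partition above the valid range

# Boundary Value Analysis: values on and just beyond each boundary
assert is_valid_age(17) is False
assert is_valid_age(18) is True
assert is_valid_age(65) is True
assert is_valid_age(66) is False
```

Seven targeted values cover the same ground as testing every possible age, which is the whole point of both techniques.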
Self-cleaning - after executing the test cases, the environment should not be left unusable; it should be possible to revert it to the state it was in before the test case was executed.
Use an ACTIVE VOICE when writing the test cases
A single test case should not have more than 15 steps. Otherwise, consider dividing the test case into parts.
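The self-cleaning practice above is what automated test frameworks implement as setup and teardown. A minimal sketch using Python's built-in unittest module (the scratch-directory scenario is illustrative):

```python
import os
import shutil
import tempfile
import unittest

class SelfCleaningTest(unittest.TestCase):
    """Each test gets a scratch directory and restores the environment afterwards."""

    def setUp(self):
        # Pre-conditions: create an isolated scratch directory for the test
        self.work_dir = tempfile.mkdtemp()

    def tearDown(self):
        # Self-cleaning: revert the environment to its pre-test state
        shutil.rmtree(self.work_dir)

    def test_creates_report(self):
        report = os.path.join(self.work_dir, "report.txt")
        with open(report, "w") as f:
            f.write("PASS")
        self.assertTrue(os.path.exists(report))
```

Run it with `python -m unittest <file>`. Note that tearDown executes even when the test fails, so the environment is restored either way.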