Exploratory testing can be described as giving the tester the freedom to engage with and appreciate the software. Most script-based testing exercises the application along a set of commonly followed workflows. In practice, however, varying customer requirements mean the application may be used under quite different workflows. If the system fails to work as intended under those conditions, the customer can suffer significant losses, leading to friction between customer and vendor.
Exploratory testing is a way of exploring the application or system under test with a rigorous set of ‘on the go’ scenarios, drawn from the tester’s application and domain knowledge, in order to detect defects early. During this exploration, the tester applies his or her experience and knowledge, sets aside some of the routine test practices, and avoids the regions that are already evaluated frequently. The tester challenges his or her own domain understanding, pursues open questions and clarifications, and exercises the software under different workflows. This not only deepens the tester’s knowledge but also ensures the application is evaluated with the proper level of intensity, giving the tester real confidence in the software under test.
Exploratory Testing – Need of the Hour
Customers and project management are always looking for faster and more efficient ways of testing. Using the various automation tools has definitely helped us reduce the testing duration and achieve the “faster” aspect that customers are looking for, but this approach alone cannot be called efficient, for the simple reason that the business intelligence built into automation scripts cannot take corrective or proactive action on its own when business needs change. The fact remains that automation scripts can never replace manual testing, and only an organization with the right blend of automated and manual scenarios can achieve high-quality deliverables.
In the traditional script-based approach to testing, testers often spend half of their effort identifying and scripting scenarios, and the rest on execution and defect reporting. Testers often feel handicapped when they are pressured to follow traditional processes: they are expected to execute and document their observations during the test execution phase and complete testing within the stipulated time. This expectation forces testers to limit validation to the available test scripts. The execution phase becomes monotonous, because the scripted testing culture takes away the joy of learning the system and the domain under test. Testers turn to ad hoc testing only if they are lucky enough to finish the identified test scripts well before the scheduled due date; by then it is too late in the test life cycle, and any defect detected creates tension and panic in the project implementation team.
The exploratory testing approach gives the tester the freedom, and the responsibility, to go the extra mile and test the system fully. This personal freedom keeps the tester enthused and in high spirits throughout the testing life cycle, and leaves the tester free to explore the system and learn more. The tester is not restricted to executing a predefined set of test scripts, but can proactively explore scenarios based on his or her intuition and knowledge of weak or risky areas of the code.
Challenges in Exploratory Testing Approach
The success of exploratory testing rests squarely in the hands of the tester. To be effective, the tester must have sound knowledge of the domain and the application under test, or at least be well equipped with documentation covering the featured functionality. In the hands of an inexperienced tester, the exploratory approach may prove disastrous.
One major disadvantage of exploratory testing is that scenarios are not defined at an early stage, nor reviewed and approved by technical and business experts. Defining test coverage and traceability is also a major challenge. Estimating the test effort, calculating productivity and encouraging re-usability are further challenges to consider.
However, one should bear in mind that exploratory testing is not a testing technique but an approach to be followed to achieve the best results. A good blend of exploratory and scripted testing can produce great results. It is good practice to review the testing approach and redefine it based on project complexity and the expertise available to test the change.
Session Based Test Management (SBTM)
One of the major challenges in exploratory testing is tracking the testing progress and sharing the results with the project team and the business. As I mentioned earlier, exploratory testers are given the freedom to test based on their intuition, skill set and experience. For a manager, collating status and reporting progress can be difficult, since there are no predefined scenarios, coverage targets or traceability against which to measure progress. Several tools and processes are available to address these challenges and manage exploratory testing effectively.
James Bach calls SBTM structured exploratory testing, where ‘structured’ does not mean the testing is pre-scripted. SBTM is a means of setting expectations for the kind of work that will be done and how it will be reported. It brings accountability to exploratory testing and gives management metrics that can be used to evaluate efficiency and performance.
SBTM comprises five basic elements:
- Charter
- Session
- Session Report
- Debrief
- Parsing Results
Charter: Testing is a mission. When an application or system is handed over for testing, the testing team defines a mission or goal and works towards meeting it. In SBTM this mission, set up for a given project, is known as a charter. In the traditional scripted method, a charter is roughly comparable to a high-level test plan or scenario, though the comparison is not one-to-one. An example of a charter in SBTM:
| SESSION 2 CHARTER | | | |
| --- | --- | --- | --- |
| Project | XXX for ABC | Manager | Joe |
| Session # | 2 | Session Type | Functional |
| Charter Plan Date | 2/7/2011 | Estimated Duration | 4 hours |
| Session Goal | | | |
Session: The session is the basic unit of testing work in SBTM. A tester often spends time on activities such as reading and responding to e-mail, team meetings and organization-wide events, which cannot be counted as testing effort. In SBTM, the uninterrupted time spent on testing is known as a session; James Bach and his colleagues describe a session as an uninterrupted block of reviewable, chartered test effort. Each session is associated with a mission or charter, hence the term ‘chartered test effort’.
Session Report: The session report records the events and areas covered in a session. It comprises the charter details, tester details, session date/time/duration, task breakdown, test data details, notes, bugs and issues. Below is a sample session report:
| SESSION 2 REPORT | | | |
| --- | --- | --- | --- |
| Project | XXX for ABC | Manager | Joe |
| Session # | 2 | Session Type | Functional |
| Session Date | 2/8/2011 | Session Time | 10:30 AM |
| Lead Tester | Tim | Duration | 3.5 Hours |
| # of tests designed and executed | 30 | | |
| # of bug investigations and reports | 2 | | |
| Session set-up time | 0.5 Hours | | |
| # of requirements covered | 10 (R2371 to R2380) | | |
| Test data files | PolicyNumbers.xls, Credentials.txt | | |
| Session Notes | | | |
| Issues | | | |
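For teams running many sessions, the same report fields can be captured as structured data so that reports can be collated programmatically. The short Python sketch below is one possible representation of the sample report above; the class and field names are illustrative assumptions, not part of SBTM or any particular tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionReport:
    """One SBTM session report (field names mirror the sample table above)."""
    project: str
    session_number: int
    session_type: str
    lead_tester: str
    duration_hours: float        # uninterrupted, chartered test time
    setup_hours: float
    tests_executed: int
    bugs_reported: int
    requirements_covered: List[str] = field(default_factory=list)
    test_data_files: List[str] = field(default_factory=list)

# The sample "Session 2" report expressed as a record:
session_2 = SessionReport(
    project="XXX for ABC",
    session_number=2,
    session_type="Functional",
    lead_tester="Tim",
    duration_hours=3.5,
    setup_hours=0.5,
    tests_executed=30,
    bugs_reported=2,
    requirements_covered=[f"R{n}" for n in range(2371, 2381)],  # R2371 to R2380
    test_data_files=["PolicyNumbers.xls", "Credentials.txt"],
)
print(session_2)
```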
Debrief: The debrief is a meeting held immediately after the session. The tester presents a summary of the session, its results, output and follow-up items to the session manager or test lead. The lead then examines the session report and summary, and suggests improvements to the tester’s testing approach.
Parsing Results: The debrief meeting helps the lead or manager collate session metrics such as session duration, number of defects detected, percentage progress achieved and session efficiency. The parsed results are shared with the development and project management teams, and defects are assigned to the developers.
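As a rough illustration of this collation step, the Python sketch below rolls a couple of hypothetical session summaries up into the kinds of metrics mentioned above. The field names and the simple time-on-test “efficiency” formula are assumptions for illustration, not a prescribed SBTM calculation.

```python
# Each completed session is summarized as a small dict (illustrative values).
completed_sessions = [
    {"duration_hours": 3.5, "setup_hours": 0.5, "tests_executed": 30, "bugs": 2},
    {"duration_hours": 4.0, "setup_hours": 0.5, "tests_executed": 24, "bugs": 5},
]
planned_sessions = 36  # taken from the estimation example in the next section

test_hours = sum(s["duration_hours"] for s in completed_sessions)
setup_hours = sum(s["setup_hours"] for s in completed_sessions)
defects = sum(s["bugs"] for s in completed_sessions)
progress_pct = 100.0 * len(completed_sessions) / planned_sessions
# One possible efficiency measure: share of session time spent testing
# rather than setting up.
efficiency_pct = 100.0 * test_hours / (test_hours + setup_hours)

print(f"Sessions completed: {len(completed_sessions)} ({progress_pct:.0f}% of plan)")
print(f"Test hours: {test_hours}, defects detected: {defects}")
print(f"Time-on-test efficiency: {efficiency_pct:.0f}%")
```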
Estimation
Estimating testing activity is always tricky and complex. In traditional methods of testing, we base the estimate on the basic test unit, either a test case or a test step. Exploratory testing can likewise use the session as its basic unit. Based on the requirements and project complexity, the tester should define a way to determine the number of sessions required for complete coverage of the requirements, which in turn drives the effort estimate.
Start with the following steps (a short worked example follows the list):
- Use the current estimation process in the project to estimate the testing effort
- Calculate the number of days available for test – say 20 days
- Calculate the essential Non Value Add (NVA) activities, such as meetings, team-building activities and leave. Assume 2 days, so the available testing time is 18 days
- Assuming the team can conduct 2 sessions per day, that gives 36 sessions
- Now all you have to do is split the requirements (both functional and non-functional) into the number of sessions calculated
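Put together, the arithmetic looks like the sketch below. The figures for days and sessions come from the steps above, while the requirement count of 72 is an assumed value for illustration only.

```python
# Worked version of the estimation steps above.
calendar_days = 20        # days available for test
nva_days = 2              # meetings, team activities, leave, etc.
sessions_per_day = 2

available_days = calendar_days - nva_days            # 18 days
total_sessions = available_days * sessions_per_day   # 36 sessions

total_requirements = 72   # assumed count of functional + non-functional requirements
reqs_per_session = total_requirements / total_sessions

print(f"Available test days:      {available_days}")
print(f"Planned sessions:         {total_sessions}")
print(f"Requirements per session: {reqs_per_session:.1f}")
```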
Summarizing the Exploration
When a tester is free to dig into the depths of the system under test, he or she is more capable of uncovering hidden bugs than a tester tied to a particular script or test case. The right blend of exploratory and script-based testing paves the way to delivering a quality product effectively and efficiently in a short period of time. However, a free tester may go off track, digging only into a particular area of personal interest, which creates the need for a strong tracking system. SBTM is the best way to reconcile the competing demands of ‘freedom’ and ‘tracking’. Every project is unique, and planning should weigh the risk, complexity, expertise and available time before deciding on the testing approach.
About the Author
Supriya Nayak