Evaluating Exit Criteria and Reporting

Exit criteria are a set of conditions, agreed with stakeholders, that determine when the testing process can officially be considered complete for a particular test level. Exit criteria should be defined for each test level.

In exit criteria evaluation, we assess test execution against the defined and agreed exit criteria for a particular test level. Based on this evaluation, we can decide whether enough testing has been done at that level to mark it officially complete.

Exit criteria commonly include points such as the following (a rough sketch of these as automated checks appears after the list):

  1. All critical, high and medium priority test cases are executed.
  2. No blocker, critical or high priority defects are outstanding.
  3. All medium and low priority defects are triaged and agreed to be deferred or fixed.
  4. Agreed fixes for medium and low priority defects are completed and retested.
  5. All test documentation, such as the test plan and test cases, is complete and up to date in the test management tool.
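As a minimal sketch, the checklist above could be encoded as boolean checks over a summary of test results pulled from your test management tool. The field names and sample numbers below are hypothetical, not taken from any particular tool.

```python
# Hypothetical summary of test execution and defect status; field names are illustrative.
summary = {
    "executed_critical": 120, "total_critical": 120,
    "executed_high": 85,      "total_high": 85,
    "executed_medium": 240,   "total_medium": 240,
    "open_blocker_critical_high_defects": 0,
    "untriaged_medium_low_defects": 0,
    "agreed_fixes_pending_retest": 0,
    "docs_up_to_date": True,
}

# Each exit criterion becomes a named boolean condition over the summary.
exit_criteria = {
    "All critical/high/medium priority test cases executed":
        summary["executed_critical"] == summary["total_critical"]
        and summary["executed_high"] == summary["total_high"]
        and summary["executed_medium"] == summary["total_medium"],
    "No blocker/critical/high priority defects outstanding":
        summary["open_blocker_critical_high_defects"] == 0,
    "All medium/low priority defects triaged":
        summary["untriaged_medium_low_defects"] == 0,
    "Agreed fixes completed and retested":
        summary["agreed_fixes_pending_retest"] == 0,
    "Test documentation up to date":
        summary["docs_up_to_date"],
}

for name, met in exit_criteria.items():
    print(f"{'PASS' if met else 'FAIL'}  {name}")

print("Exit criteria met" if all(exit_criteria.values()) else "Exit criteria NOT met")
```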

The main tasks for evaluating exit criteria are as follows:

First, check the test logs against the defined exit criteria. For example, check the test execution progress for all critical, high and medium priority test cases, the status of all outstanding defects, and how many of them need to be fixed and retested to fulfill the exit criteria.
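A minimal sketch of that check is shown below, assuming the test management tool can export test runs and defects as plain records. The record fields, priority labels and status values are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical exports from a test management tool; field names are illustrative.
test_runs = [
    {"id": "TC-101", "priority": "critical", "status": "passed"},
    {"id": "TC-102", "priority": "high",     "status": "failed"},
    {"id": "TC-103", "priority": "medium",   "status": "not run"},
]
defects = [
    {"id": "BUG-7", "priority": "high",   "state": "open"},
    {"id": "BUG-9", "priority": "medium", "state": "deferred"},
]

# Execution progress per priority: a test case counts as executed
# if it has any status other than "not run".
executed = Counter()
totals = Counter()
for run in test_runs:
    totals[run["priority"]] += 1
    if run["status"] != "not run":
        executed[run["priority"]] += 1

for prio in ("critical", "high", "medium"):
    print(f"{prio}: {executed[prio]}/{totals[prio]} test cases executed")

# Outstanding defects that still block the exit criteria.
blocking = [d for d in defects
            if d["priority"] in ("blocker", "critical", "high") and d["state"] == "open"]
print(f"{len(blocking)} blocker/critical/high defects still need fixing and retesting")
```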

Second, assess whether more tests need to be executed because of product quality concerns (for example, you found more integration issues than expected). You might want to add more tests to reduce the quality risks covered by the exit criteria. Sometimes you might also need to modify the exit criteria, with stakeholder agreement, if the criteria set initially were very strict but, at the current stage of the project, the unmet conditions pose minimal risk, so that the exit criteria can be met.

Finally, write the test summary report. When you evaluate the exit criteria, it is not only the development and test teams that need to know the outcome; a broader set of stakeholders also needs to know what happened at each test level so that they can make decisions about the software. For example, is it appropriate to start the next test level, or, if all test levels have been completed, can the software go live in production?
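As a rough sketch, the headline numbers from the evaluation can be assembled into a short summary for stakeholders. The structure, field names and figures below are illustrative assumptions, not a prescribed report format.

```python
from datetime import date

# Hypothetical evaluation results; values and field names are for illustration only.
evaluation = {
    "level": "System testing",
    "criteria_met": True,
    "executed": 445, "planned": 450,
    "passed": 430, "failed": 15,
    "open_high_defects": 0,
    "deferred_defects": 6,
}

verdict = "yes" if evaluation["criteria_met"] else "no"
recommendation = ("proceed to the next test level or the release decision"
                  if evaluation["criteria_met"]
                  else "continue testing at this level")

# Assemble a brief, plain-text summary that non-technical stakeholders can read.
report = (
    f"Test Summary Report: {evaluation['level']} ({date.today():%Y-%m-%d})\n"
    f"Execution: {evaluation['executed']}/{evaluation['planned']} test cases run\n"
    f"Results: {evaluation['passed']} passed, {evaluation['failed']} failed\n"
    f"Defects: {evaluation['open_high_defects']} open high-priority, "
    f"{evaluation['deferred_defects']} deferred\n"
    f"Exit criteria met: {verdict}\n"
    f"Recommendation: {recommendation}\n"
)
print(report)
```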