
Fail Fast, Fail Smart: Optimizing TestNG for Robust Test Automation

Hello Techies,


Today I have an interesting topic for SDETs that helps with analyzing test output in TestNG execution. We relish finding bugs, but how can we be sure that the failures after an automated test run are genuine product bugs and not random failures? Such failures can happen due to a flaky environment, unresponsive browsers, third-party API glitches, unexpected server delays, network issues, and so on. To be sure, we can simply re-run the failed tests a few times to check whether they fail consistently before marking them as ‘Failed’. This approach lets us investigate genuine product bugs further while clearing random failures through re-runs, so the build can move forward.


First, let me give a brief overview of TestNG.


TestNG (Test Next Generation) is a testing framework inspired by JUnit and NUnit, but it introduces new functionality that simplifies a broad range of testing needs, from unit testing to integration testing.


Three simple steps to write and execute the tests in TestNG with Maven project are:

  1. Write the business logic in the methods and add TestNG Annotations to them.

  2. Create a testng.xml file to give the information about the tests.

  3. To run the tests, add the testng.xml file to the Maven Surefire plugin in the pom.xml, as shown in the sketch below.
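
For reference, here is a minimal sketch of step 3. The pom.xml snippet below assumes testng.xml sits in the project root, and the plugin version shown is illustrative:

<build>
	<plugins>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-surefire-plugin</artifactId>
			<!-- Version is illustrative; use the one your project standardizes on -->
			<version>3.2.5</version>
			<configuration>
				<suiteXmlFiles>
					<!-- Path assumed relative to the project root -->
					<suiteXmlFile>testng.xml</suiteXmlFile>
				</suiteXmlFiles>
			</configuration>
		</plugin>
	</plugins>
</build>

With this in place, running ‘mvn test’ picks up the suite file automatically.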


Coming back to the topic of re-running failed tests, we have already discussed why we need it and now let’s explore how we can achieve this using the built-in features of TestNG.


There are three different approaches available; let’s look at them one by one.


1. Manual re-run using testng-failed.xml

You can use this approach to re-run the test cases that failed in the last execution.


  • After each execution, all the failed test methods will be recorded in the testng-failed.xml file, located in the test-output folder.

  • This XML file contains all the information, including dependent methods, to re-run only the failed test cases.

  • By running this XML file, you can reproduce all the failed test cases for further analysis without having to re-run the entire test suite (see the sketch below for how to trigger it).
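
How you trigger this file depends on your setup. Two common options, assuming the default test-output location (the classpath entries below are placeholders to fill in for your project):

# Option 1: run the failed tests directly through the TestNG main class
java -cp "target/test-classes:target/classes:<your dependency jars>" org.testng.TestNG test-output/testng-failed.xml

# Option 2: point Maven Surefire at the file for a one-off run
mvn test -Dsurefire.suiteXmlFiles=test-output/testng-failed.xml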


Let’s understand this concept through an example.

I have created a class called ‘LoginPage’ with three test cases, one of which is failing due to an assertion error.


import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginPage {

	@Test
	public void TC01() {
		System.out.println("Test 1 of LoginPage Class");
	}

	@Test
	public void TC03() {
		System.out.println("Test 3 of LoginPage Class");
	}

	@Test
	public void TC02() {
		System.out.println("Test 2 of LoginPage Class");
		// Deliberate failure to demonstrate re-running failed tests
		Assert.assertTrue(false, "Failing Test 2 for demo purpose");
	}
}

As TestNG execution starts from the testng.xml file, I added the ‘LoginPage’ class to the XML file to run these test cases.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite">
	<test name="Test">
		<classes>
			<class name="module1.LoginPage" />
		</classes>
	</test> <!--Test -->
</suite> <!--Suite -->

After the execution, the results will be displayed in the console as follows:

[RemoteTestNG] detected TestNG version 7.10.2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Test 1 of LoginPage Class
Test 2 of LoginPage Class
Test 3 of LoginPage Class

===============================================
Suite
Total tests run: 3, Passes: 2, Failures: 1, Skips: 0
===============================================

As expected, 1 test case failed. Now we need to re-run it to check whether the failure is consistent before analyzing its root cause.


In this sample class, we could easily run the testng.xml file again since the test case count is small. However, imagine having 100 test cases with only one failing; re-running all 100 via testng.xml just to check that one is time-consuming. This is where testng-failed.xml comes into play: it contains only the failed test cases, making it far more efficient to run in this scenario.


The testng-failed.xml file for the above example is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Failed suite [Failed suite [Suite]]" guice-stage="DEVELOPMENT">
  <test thread-count="5" name="Test(failed)(failed)">
    <classes>
      <class name="module1.LoginPage">
        <methods>
          <include name="TC02"/>
        </methods>
      </class> <!-- module1.LoginPage -->
    </classes>
  </test> <!-- Test(failed)(failed) -->
</suite> <!-- Failed suite [Failed suite [Suite]] -->

Running this XML file gives the following result. As the output shows, only the failed test case was re-run, not all the test cases, which was exactly our intention. (Note that TestNG regenerates testng-failed.xml after each execution, so a re-run overwrites the file’s previous contents.)

[RemoteTestNG] detected TestNG version 7.10.2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Test 2 of LoginPage Class

===============================================
Failed suite [Suite]
Total tests run: 1, Passes: 0, Failures: 1, Skips: 0
===============================================

2. Automatic re-run using IRetryAnalyzer Interface

This approach comes in handy when you already know which test cases are likely to fail for reasons other than a product defect.

Here's how to achieve this:

  • Create a class that implements the IRetryAnalyzer interface and overrides the "retry" method with the logic to re-run failed test cases a specific number of times.

  • Reference this class in the retryAnalyzer attribute of the @Test annotation on the test method that is likely to fail.

 

Let’s apply this concept to the already discussed example.


import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginPage {

	@Test
	public void TC01() {
		System.out.println("Test 1 of LoginPage Class");
	}

	@Test
	public void TC03() {
		System.out.println("Test 3 of LoginPage Class");
	}

	// Re-run this test up to maxRetryCount times on failure
	@Test(retryAnalyzer = Failed_Retry.class)
	public void TC02() {
		System.out.println("Test 2 of LoginPage Class");
		Assert.assertTrue(false, "Failing Test 2 for demo purpose");
	}
}

As we have already seen, test case ‘TC02’ of the LoginPage class will fail because of an assertion error. Now I want TestNG to re-run that test case 3 times to check the consistency of the failure before declaring it failed.

Hence, I have added the retryAnalyzer attribute to the test case, pointing it to the ‘Failed_Retry’ class, which implements the IRetryAnalyzer interface.


Let’s look into the logic of Failed_Retry class.

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class Failed_Retry implements IRetryAnalyzer {

	private int retryCount = 0;
	private final int maxRetryCount = 3;

	// retry() returning true  --- the failed test case is re-run
	// retry() returning false --- no re-run; control moves to the next test case
	@Override
	public boolean retry(ITestResult result) {
		if (retryCount < maxRetryCount) {
			retryCount++;
			return true;
		}
		return false;
	}
}

This class will be invoked whenever test case TC02 fails. We can assign any number to the variable ‘maxRetryCount’, and the failed test case will be re-run that many times.
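
If you also want the console to show each attempt, a small variant of the analyzer can log it. A minimal sketch, assuming the same setup as above; the class name ‘LoggingRetry’ is illustrative:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class LoggingRetry implements IRetryAnalyzer {

	private int retryCount = 0;
	private static final int MAX_RETRY_COUNT = 3;

	@Override
	public boolean retry(ITestResult result) {
		if (retryCount < MAX_RETRY_COUNT) {
			retryCount++;
			// Log which test is being retried and the attempt number
			System.out.println("Retrying " + result.getName()
					+ " - attempt " + retryCount + " of " + MAX_RETRY_COUNT);
			return true;
		}
		return false;
	}
}

For the run below, though, we stick with the original Failed_Retry class.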


When you execute the ‘LoginPage’ class, you will receive the test output as follows.


[RemoteTestNG] detected TestNG version 7.10.2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Test 1 of LoginPage Class
Test 2 of LoginPage Class
Test 2 of LoginPage Class
Test 2 of LoginPage Class
Test 2 of LoginPage Class
Test 3 of LoginPage Class
PASSED: module1.LoginPage.TC03
PASSED: module1.LoginPage.TC01
FAILED: module1.LoginPage.TC02
===============================================
    Default test
    Tests run: 3, Failures: 1, Skips: 0, Retries: 3
===============================================
===============================================
Default suite
Total tests run: 6, Passes: 2, Failures: 1, Skips: 0, Retries: 3

We can see that the 'Retries' count is 3, indicating that the failed test case was re-run 3 times, and the 'Total tests run' is 6 (the 3 original tests plus the 3 retries).


However, the drawback of this approach is that, even before test execution, you need to identify which test cases are likely to fail and link them to the retry analyzer, which may not always be feasible.


3. Automatic re-run using IAnnotationTransformer Interface

This is similar to the second approach, but it resolves the previously discussed disadvantage as it can be used even when you don’t know which test case is likely to fail.


Here's how to achieve this:

  • In the testng.xml file, include the <listeners> tag and add the listener class name in which you have implemented the IAnnotationTransformer interface. (Note that IAnnotationTransformer is one of the few listeners that cannot be registered via the @Listeners annotation; it must be declared in testng.xml or registered programmatically.)

  • In the listener class, use the setRetryAnalyzer() method to register the class in which you have implemented the IRetryAnalyzer interface (in our example, the Failed_Retry class).


Let’s discuss this concept further with the 'LoginPage' class.

1. Adding <listeners> tag and the listener class (Failed_Transform) to the testng.xml file.


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite">
	<listeners>
		<listener class-name="module1.Failed_Transform"></listener>
	</listeners>
	<test name="Test">
		<classes>
			<class name="module1.LoginPage" />
		</classes>
	</test> <!--Test -->
</suite> <!--Suite -->

Listeners are interfaces in TestNG that listen to the events happening during test execution. They can be used to modify TestNG's behavior by responding to those events.

In this example, we are using the IAnnotationTransformer listener. Rather than reacting to failures directly, this listener lets us modify the @Test annotations at runtime, so we can attach the retry analyzer to every test method without touching the test code.


2. In the listener class (Failed_Transform class), overriding the transform() method of IAnnotationTransformer.

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class Failed_Transform implements IAnnotationTransformer {

	@Override
	public void transform(ITestAnnotation annotation, Class testClass,
			Constructor testConstructor, Method testMethod) {
		// Attach the retry analyzer to every @Test method at runtime
		annotation.setRetryAnalyzer(Failed_Retry.class);
	}
}

So, with this listener registered, TestNG invokes the transform() method of the Failed_Transform class for every test method before execution and attaches the retry analyzer, which is the 'Failed_Retry' class in our example. Whenever a test case then fails, that analyzer decides whether to re-run it.

We have already seen how the ‘Failed_Retry’ class works in the previous approach. In summary, this class will help us re-run the failed tests a specific number of times.
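
One refinement worth noting: since transform() is called for every test method, you can also apply the analyzer selectively. A minimal sketch, assuming you only want retries for a known flaky method; the class name ‘SelectiveTransform’ and the method-name check are illustrative:

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class SelectiveTransform implements IAnnotationTransformer {

	@Override
	public void transform(ITestAnnotation annotation, Class testClass,
			Constructor testConstructor, Method testMethod) {
		// testMethod can be null when TestNG inspects constructors or classes
		if (testMethod != null && testMethod.getName().equals("TC02")) {
			annotation.setRetryAnalyzer(Failed_Retry.class);
		}
	}
}

For the run below, we keep the original Failed_Transform, which applies the analyzer to every test.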


Now, let's see how the execution of ‘LoginPage’ class progresses after applying this concept.

When you run the testng.xml file, you will get the following result.


[RemoteTestNG] detected TestNG version 7.10.2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Test 1 of LoginPage Class
Test 2 of LoginPage Class
Test 2 of LoginPage Class
Test 2 of LoginPage Class
Test 2 of LoginPage Class
Test 3 of LoginPage Class

===============================================
Suite
Total tests run: 6, Passes: 2, Failures: 1, Skips: 0, Retries: 3
===============================================

As expected, the failed test case 'Test 2' was executed 4 times in total: the original run plus 3 retries.


After considering all three approaches, using IAnnotationTransformer to re-run tests seems the most practical, although this may vary depending on the project.


In this blog, we have explored how to leverage TestNG's built-in features to re-run failed test cases. I encourage all readers to explore this concept further and share their insights in the comments section. Let’s learn together.

 

Enjoy learning!
