A great presentation with audio that talks about JUnit 4, test naming and other things. I liked this presentation because it starts expressing Behaviour Driven Development concepts without actually using a Behaviour Driven Development framework. Additionally, it's one of the more in-depth presentations covering the new features of JUnit like Rules.
The presenter is John Ferguson Smart, who runs testing and agile tools bootcamps around Australia and NZ (as well as elsewhere), so if you are in the vicinity you should consider looking into them.
My notes from the presentation:
Naming tests
Don't use the name testXXX – you aren't testing methods of your code
You are testing use cases – what should your class be doing
If private methods are getting too big, maybe extract them to a separate class and test that class separately.
Use the word should. Express an outcome.
Class Name: When is a behaviour applicable
Method Name: What is the behaviour being tested
Tests become more readable.
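A sketch of the convention above (the Cell class and the test names are invented for illustration):

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// A minimal class under test, invented for illustration
class Cell {
    public boolean isAlive() {
        return true;
    }
}

// Class name answers: when is this behaviour applicable?
public class WhenACellIsCreated {

    // Method name uses "should" and expresses an outcome,
    // not the name of the method being called
    @Test
    public void shouldBeAliveByDefault() {
        assertTrue(new Cell().isAlive());
    }
}
```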
Arrange-Act-Assert
Create the test collaborators – Inputs and Expected Outputs (Arrange)
Test the behaviour (Act)
Verify behaviour is correct (Assert)
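A minimal sketch of the pattern, using a plain List as a stand-in for a real class under test:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class WhenAddingAnItemToAList {

    @Test
    public void shouldIncreaseTheSize() {
        // Arrange: create the test collaborators - inputs and expected outputs
        List<String> items = new ArrayList<String>();
        int expectedSize = 1;

        // Act: exercise the behaviour under test
        items.add("java");

        // Assert: verify the behaviour is correct
        assertEquals(expectedSize, items.size());
    }
}
```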
Extending Hamcrest Matchers
Combine matchers together – hasItem(someOtherHasPropertyMatcherYouPreviouslyDefined)
Create a new matcher –
- Extend TypeSafeMatcher<type of thing you are checking against>
- Implement a constructor that takes in a matcher of the expected value
- Implement matchesSafely(type of thing you are checking against)
- Implement describeTo – decorate the existing test result/matcher description… description.appendText("WuzUp!"); matcher.describeTo(description);
- Create a factory class for your matchers with static factory methods that return a new matcher
- Use it
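Pulling the steps above together, a hedged sketch of a custom matcher (the hasLength matcher and the MyMatchers factory class are invented for illustration; the API shapes are those of Hamcrest 1.x as bundled with JUnit 4):

```java
import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

// A matcher that applies another matcher to a String's length.
// Name and behaviour invented for illustration.
class HasLength extends TypeSafeMatcher<String> {

    private final Matcher<Integer> lengthMatcher;

    // Constructor takes in a matcher of the expected value
    public HasLength(Matcher<Integer> lengthMatcher) {
        this.lengthMatcher = lengthMatcher;
    }

    @Override
    public boolean matchesSafely(String item) {
        return lengthMatcher.matches(item.length());
    }

    @Override
    public void describeTo(Description description) {
        // Decorate the existing matcher description
        description.appendText("a string with length ");
        lengthMatcher.describeTo(description);
    }
}

// Factory class with static factory methods returning a new matcher
class MyMatchers {
    public static Matcher<String> hasLength(Matcher<Integer> lengthMatcher) {
        return new HasLength(lengthMatcher);
    }
}
```

Used as, say, assertThat("java", MyMatchers.hasLength(is(4))).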
Multiple asserts per test are bad (see also Integration Tests are a Scam)
You can combine hamcrest matchers into one test
assertThat(items, allOf(hasSize(1), hasItem("java")));
assertThat(items, hasSize(greaterThan(1)));
The error messages will be cleaner too – expected a list of size one that has item "java" but received <blah>
Parameterized Tests
Usually just one test per Parameterized test class – it gets run once per row of test data
Ways to get test data
Use an xls spreadsheet source
Use Selenium 2's WebElement to get a webpage
@FindBy(name="HD_EN") WebElement importerName;
@FindBy(name="HD_EA") WebElement importerAddress;
@FindBy(name="HD_ER") WebElement importerRepresentative;
// getter
public String getImporterName() { return importerName.getValue(); }

// setter
public void setImporterName(String value) {
    enter(value, into(importerName));
}
Smoke test to make sure getters and setters are correct
Make sure the annotations aren't wrong
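A minimal Parameterized test with hard-coded data (the class and the data are invented for illustration; as the presentation suggests, the rows could instead come from a spreadsheet or a scraped webpage). The single test method runs once per data row:

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class WhenSquaringNumbers {

    // Each Object[] row becomes one run of the test
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 2, 4 }, { 3, 9 }, { 4, 16 }
        });
    }

    private final int input;
    private final int expected;

    // JUnit passes each row into the constructor
    public WhenSquaringNumbers(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void shouldReturnTheSquare() {
        assertEquals(expected, input * input);
    }
}
```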
JUnit Rules
Delete folders after test run
@Rule public TemporaryFolder folder = new TemporaryFolder();
folder.newFile("a file name");
ErrorCollector
Accumulates errors rather than failing on the first. This saves having to write 20 different tests with large setup that each check one of 20 things on the same page (e.g. login, load a webpage table, then verify each cell)
@Rule public ErrorCollector collector = new ErrorCollector();
// in your test
collector.addError(new Throwable("blah"));
collector.addError(new Throwable("something else"));
collector.checkThat(result, yourMatcher);
The result will show "blah", "something else" and the result of your failed matcher, as well as fail the test.
Timeout Rule
When you know something should have a short response time – a DAO call, for example, should take less than 1 second
@Rule public MethodRule globalTimeout = new Timeout(1000);
@Test public void catchInfiniteLoopInTest() { for(;;); }
Catch any major issues before they get into production and become embarrassing
Verifier Rule
Something that happens after a test is completed, like an assert
Inject behaviour to make JUnit add checks after each test… kind of like a test post-condition or invariant from Bertrand Meyer's Object-Oriented Software Construction, but just for the tests themselves.
private List<String> systemErrorMessages = new ArrayList<String>();

@Rule
public MethodRule verifier = new Verifier() {
    @Override
    public void verify() {
        assertThat(systemErrorMessages.size(), is(0));
    }
};
A good example I can see would be using it to tie up mock verification calls in EasyMock
Watchman Rule
Called when a test fails or succeeds. Additional logging perhaps? How about restarting a GUI, or taking a screenshot when a test fails.
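A sketch using JUnit 4.7's TestWatchman rule (the logging and the test class are illustrative; later JUnit versions replace TestWatchman with TestWatcher):

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.MethodRule;
import org.junit.rules.TestWatchman;
import org.junit.runners.model.FrameworkMethod;

public class WatchmanExampleTest {

    // Log on failure or success; a real test might restart a GUI
    // or take a screenshot in failed() instead
    @Rule
    public MethodRule watchman = new TestWatchman() {
        @Override
        public void failed(Throwable e, FrameworkMethod method) {
            System.out.println(method.getName() + " failed: " + e.getMessage());
        }

        @Override
        public void succeeded(FrameworkMethod method) {
            System.out.println(method.getName() + " succeeded");
        }
    };

    @Test
    public void shouldTriggerTheWatchman() {
        // passes, so succeeded() is called after this test
    }
}
```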
Categories
Group tests in your own hierarchy based on your classification of the test rather than writing test suites. Performance tests versus integration tests. Slow running or fast running tests?
You can set up plain old normal interfaces for your categories, and have them extend each other via subclassing. There is no JUnit annotation here to indicate it's an interface for testing, so you can potentially use any interface in your source. I'm not sure if this is good practice or not, but say you wanted all your DAO tests that implemented a GenericDAO to be tested, you could do this… or how about testing all classes that implement Serializable?
You can annotate a test class, or test methods, with @Category(InterfaceName.class)
When running a category suite, however, you still need to include the classes to inspect as well as the category name.
@RunWith(Categories.class)
@IncludeCategory(PerformanceTests.class)
@SuiteClasses( { CellTest.class, WhenYouCreateANewUniverse.class })
public class PerformanceTestSuite {}
@RunWith(Categories.class)
@ExcludeCategory(PerformanceTests.class)
@SuiteClasses( { CellTest.class, WhenYouCreateANewUniverse.class })
public class NonPerformanceTestSuite {}
But how about scanning your whole test bed? Can we programmatically inject suite classes and use them with Categories? At this point it is a limitation unless you want to use a classpath hack.
Parallel Tests
If you have fast IO and multicore (like my new work PC) and well written tests that don't tread on each other's data.
Use Maven's Surefire 2.5 plugin to achieve this, and specify methods, classes or both to run in parallel. Classes is probably safer, since people often accidentally write tests where later test methods in the same class depend on earlier ones.
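A sketch of what the Surefire configuration might look like (version and thread count are illustrative; parallel test execution also needs JUnit 4.7+):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.5</version>
  <configuration>
    <!-- "methods", "classes" or "both" -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```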
Infinitest
This is a tool for IntelliJ and Eclipse that runs tests when you save your source and tells you if you have failed runs. I remember twittering a while back about how cool this would be if it existed, and I'm glad I wasn't the only one with this idea and that someone actually implemented it.
Also there was a plugin for IntelliJ called Fireworks, but I could never get it to run tests properly on my Windows PC; it was always complaining about not being able to find a JDK home.
This tool seems pretty cheap at $29 for an individual license; I'll check it out and give it a shot.
http://improvingworks.com/products/infinitest/
What would be super cool is if it worked with the Categories mentioned above, to be able to exclude GUI tests from being executed. There may be a feature in Infinitest that handles this, but I'd be keen to see.
Mockito
I'm traditionally an EasyMock guy, but Mockito has always had good buzz. At my new job we don't actually have a mocking framework yet, so I'm keen to give it a look.
Mockito seems to have less verbose test setup, something that bashed me around a bit when learning EasyMock – ever forget to get out of record mode and accidentally get a green test?
As Integration Tests are a Scam recommends, you can verify interactions – verify a method is being called with certain params.
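A hedged sketch of Mockito's style, using a mocked List purely for illustration – there are no record/replay phases as in EasyMock; you stub, act, then verify the interaction:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;

import org.junit.Test;

public class WhenUsingMockito {

    @Test
    @SuppressWarnings("unchecked")
    public void shouldVerifyAnInteraction() {
        // Stub: no explicit record/replay switch to forget
        List<String> mockList = mock(List.class);
        when(mockList.get(0)).thenReturn("java");

        // Act
        mockList.add("java");
        assertEquals("java", mockList.get(0));

        // Verify the method was called with certain params
        verify(mockList).add("java");
    }
}
```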
Other Stuff (comments from the Q&A of the presso)
Hibernate mappings – use an in-memory database
FEST asserts – an alternative to Hamcrest that avoids the generics issues that plague Hamcrest! (boy, this frustrates me a lot as a Hamcrest user)
Cobertura – a code coverage tool, an alternative to Emma
Private methods shouldn't be tested explicitly – you should be able to sufficiently test a class through its public-facing API.