JPragma Blog

Pragmatic Developers, Elegant Software

Archive for the ‘Testing’ Category

Integration testing of legacy code


Wouldn’t it be nice to work only on greenfield projects, with a team that shares the same vision and style? Unfortunately, we spend a significant portion of our professional lives dealing with messy legacy code. Such code is very hard to comprehend. It often consists of tangled classes with an arbitrary division of responsibilities. Unit test coverage is usually very low. Sometimes unit tests are nominally there, but you can clearly see that they were written after the fact, simply repeating the mess of the production code. Such tests are fragile and don’t provide an adequate “safety net”.

Before trying to refactor such legacy code, we must first create good end-to-end tests: tests that are, as Kent Beck says, “sensitive to changes in system behavior, but insensitive to changes in code structure”.

Let’s assume your project is a typical Spring Boot application. We have some complex functionality that works, but we don’t fully understand how. We want to do some “forensic analysis” of its implementation and cover it with a stable end-to-end integration test. We know that this functionality starts with an externally exposed REST API. We know that, after going through layers of services, managers, helpers, utility classes, repositories, etc., it stores some data in the database and also makes calls to external systems over REST and/or JMS.

Here is the plan:

  1.  Assuming JUnit 5, create a test class “XyzComponentIT” and annotate it with @ExtendWith(SpringExtension.class) and @ContextConfiguration(classes=XyzConfig.class). The XyzConfig class should be annotated with @TestConfiguration and will be used to create Spring beans explicitly via @Bean methods instead of relying on a component scan. This way we know exactly which beans participate in our workflow.
  2.  Create the first bean for the entry point (the REST controller) and inject it into the test using @Autowired.
  3.  Create a test case method that invokes this entry point, passing some typical payload into it.
  4.  If you try to run the test at this point, it will fail to initialize the Spring context. This is expected: we haven’t provided any of the controller’s dependencies yet. Let’s start creating them one by one.
  5.  Since we are writing end-to-end integration tests, we want to create beans using the real implementation of every dependency, except “edge components”. Edge components are those that encapsulate communication with external systems, e.g. database repositories, HTTP and JMS clients, etc.
  6.  Beans of edge components should be mocked. One very useful technique here is “strict mocks”. Strict mocks fail on every call that is not explicitly stubbed. This way we can identify the exact external system communication and cross-check it with the business requirements. A snippet of a Mockito-based implementation of such strict mocks is shown right after this list. For better readability, consider encapsulating such mock beans in properly designed test doubles.
  7.  Next, implement the actual test case method, stubbing calls that read external data and verifying essential interactions with the external systems. For example, if our workflow updates the customer profile in the database and sends notifications via an email service, then we should verify all interactions with the test doubles that encapsulate the DAO and email server communication.
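To make step 6 concrete, here is a minimal sketch of such a strict mock registered as a bean in the test configuration. CustomerRepository is a hypothetical edge component used only for illustration; substitute your own DAO, HTTP client, or JMS interfaces.

import org.mockito.Mockito;
import org.mockito.stubbing.Answer;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;

@TestConfiguration
public class XyzConfig {

    // ... @Bean methods creating the REAL controller, services, helpers, etc.
    // that participate in the workflow go here ...

    // Edge component as a "strict mock": any call that has not been explicitly
    // stubbed fails the test, exposing every interaction with the external
    // system so it can be cross-checked against the business requirements.
    @Bean
    public CustomerRepository customerRepository() {
        Answer<Object> failOnUnstubbedCall = invocation -> {
            throw new AssertionError("Unexpected call: " + invocation);
        };
        return Mockito.mock(CustomerRepository.class, failOnUnstubbedCall);
    }
}

One consequence of the throwing default answer: stub with doReturn(...).when(mock).findById(...) rather than when(mock.findById(...)).thenReturn(...), because the latter would invoke the throwing answer while the stub is being set up.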

Integration tests described here are quite stable since they have minimal knowledge of the internal structure and implementation details of our components. Because they treat the whole system as a black box, they are an essential tool that gives us confidence while refactoring the code.

Here is the GIST of a very simplified example:
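In the same spirit, here is a rough sketch of what such a simplified test might look like. CustomerController, Customer, ProfileUpdateRequest, and updateProfile are hypothetical names used only for illustration (save is assumed to return the saved entity, Spring Data style); CustomerRepository is the strict mock from the configuration above.

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.argThat;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.verify;

import java.util.Optional;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = XyzConfig.class)
class XyzComponentIT {

    @Autowired
    private CustomerController customerController; // real entry point

    @Autowired
    private CustomerRepository customerRepository; // strict mock from XyzConfig

    @Test
    void updatingProfileStoresNewEmail() {
        // Stub only the reads this workflow needs; doReturn avoids triggering
        // the strict mock's throwing default answer during stubbing.
        doReturn(Optional.of(new Customer("42", "old@example.com")))
                .when(customerRepository).findById("42");
        doAnswer(invocation -> invocation.getArgument(0))
                .when(customerRepository).save(any(Customer.class));

        // Invoke the externally exposed entry point with a typical payload.
        customerController.updateProfile("42", new ProfileUpdateRequest("new@example.com"));

        // Verify only the interactions that matter to the business behavior.
        verify(customerRepository).save(argThat(c -> "new@example.com".equals(c.getEmail())));
    }
}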

 

Written by isaaclevin

April 10, 2020 at 11:23 am

Posted in Java, Testing

Properties of good tests


I recently read a very interesting article called “Test Desiderata” published by Kent Beck. In this article, Kent describes the properties of good tests. He also mentions that tests of different levels (unit, integration, acceptance, etc.) should focus on different properties.

Here I’d like to discuss some of these properties and try to find practical steps to improve our tests with regard to them.

Kent Beck says:

“Programmer (aka “unit” tests). Give up tests being predictive and inspiring for being writable, fast, and specific.”

Writable — tests should be cheap to write relative to the cost of the code being tested.

So, unit tests are our first line of defense. They should be extremely cheap to write, easy to read, and very fast to execute.

Tools that help increase the development speed of such tests are the Spock Framework and Groovy in general. Even if the production code is in Java, I still prefer writing my unit tests in Groovy. “Optional typing”, “Multiline strings”, “Safe Navigation”, “Native syntax for data structures”, and especially “Initializing beans with named parameters” are huge productivity boosters. Check out my presentation deck for a very basic introduction to Groovy. Spock takes productivity even further, allowing you to write very concise and expressive tests. My favorite feature is Spock’s own mock/stub framework; it is far more readable than the more commonly used Mockito. Enabling Groovy and Spock in a Java project is simply a matter of adding the appropriate Gradle or Maven dependency.

Here is a somewhat controversial thought: if our tests are very cheap to write, maybe we should treat them as immutable. I recently saw a tweet suggesting that we should be allowed to write new unit tests or delete existing ones, but never modify them. This might sound a bit extreme, but there is a rationale behind it. Quite often our tests look good initially, but their quality and readability degrade over time. We refactor production code, which leads to test failures; we realize that the test is outdated, and we start quickly “patching” it.

Kent Beck says:

Inspiring — passing the tests should inspire confidence.

Structure-insensitive — tests should not change their result if the structure of the code changes.

Predictive — if the tests all pass, then the code under test should be suitable for production.

These properties are very important for integration/end-to-end tests. I discussed this in my previous post. When writing “stable” e2e tests, I use real components/collaborators for the entire system and mock only “edge” dependencies such as DAO repositories, JMS abstraction interfaces, REST clients, etc. I prefer creating dedicated mock implementations of such “edge” components rather than using raw Mockito, JMock, or Spock mocks. This helps make such tests readable for non-programmers. A typical e2e test starts a specially configured application context, invokes the input component, and validates the expected behavior at all “edges”. The value of such tests is enormous: they give us confidence that the entire system is working properly. Being structure-insensitive, e2e tests serve as a safety net when refactoring code.
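As an illustration, here is a sketch of such a dedicated test double; EmailGateway and the assertion helper names are made up for this example rather than taken from a real project.

import java.util.ArrayList;
import java.util.List;

// Hypothetical "edge" abstraction over the external email system.
interface EmailGateway {
    void send(String to, String subject, String body);
}

// Hand-written test double: records outgoing emails and exposes
// intention-revealing assertions instead of raw verify(...) calls.
class RecordingEmailGateway implements EmailGateway {

    private final List<String> recipients = new ArrayList<>();

    @Override
    public void send(String to, String subject, String body) {
        recipients.add(to);
    }

    void assertNotificationSentTo(String expectedRecipient) {
        if (!recipients.contains(expectedRecipient)) {
            throw new AssertionError("Expected a notification to " + expectedRecipient
                    + " but notifications went to: " + recipients);
        }
    }

    void assertNoNotificationsSent() {
        if (!recipients.isEmpty()) {
            throw new AssertionError("Unexpected notifications to: " + recipients);
        }
    }
}

Registered as the EmailGateway bean in the test context, such a double lets the test read almost like the requirement, e.g. emailGateway.assertNotificationSentTo("customer@example.com").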

In a future post, I’ll cover my experience working with legacy code and how to efficiently create e2e tests there.

As usual, comments, opinions, and criticism are more than welcome!

 

 

Written by isaaclevin

November 3, 2019 at 10:54 am

Posted in Java, Testing

Don’t over spec your tests


Yesterday I had a discussion with my colleagues about the properties of good tests. I think that, in general, tests serve four purposes, in increasing order of importance:
  1. Validate the correctness of the system under test
  2. Document the usage and expectations of the tested module
  3. Help design the component’s API and interactions (when practicing TDD)
  4. Provide a safety net that enables fearless refactoring
The last point is the most important one, in my opinion. To provide such a safety net, tests must be, as Kent Beck puts it, “sensitive to changes in system behavior, but insensitive to changes in code structure”.

How do we achieve this?

Perhaps we should prefer higher-level component/module tests. Such tests are considerably more stable and insensitive to structural changes. We should limit the usage of mocks in such tests, probably mocking only collaborators that live outside of the component’s boundaries.

We should only verify interactions with collaborators that are essential to the business logic of our component.

What do I mean by that? I often see unit tests where developers stub responses of the component’s collaborators and then verify ALL of these interactions. With Mockito, they sometimes use “lenient” matchers like any() or isA() while stubbing, and “strict” matchers like eq() while verifying. This technique is OK, but in my opinion it should only be applied to true mocks: calls that are essential to the behavior of the system.

Calls to simple data providers (stubs) shouldn’t be verified at all; otherwise, the test conveys the wrong intent of its author and becomes quite fragile.

The difference between stubs and mocks is explained very well in this article by Martin Fowler.
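A small sketch of that distinction with Mockito; ExchangeRateProvider, PaymentGateway, and PaymentService are hypothetical names used only to illustrate the point.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.math.BigDecimal;

import org.junit.jupiter.api.Test;

class PaymentServiceTest {

    @Test
    void chargesTheConvertedAmount() {
        // Stub: a simple data provider. We stub its response, but we never
        // verify how (or how many times) it was called.
        ExchangeRateProvider rates = mock(ExchangeRateProvider.class);
        when(rates.rateFor("EUR")).thenReturn(new BigDecimal("1.10"));

        // Mock: a collaborator whose interaction IS the expected behavior.
        PaymentGateway gateway = mock(PaymentGateway.class);

        new PaymentService(rates, gateway).charge("customer-1", new BigDecimal("100"), "EUR");

        // Verify only the essential interaction, with exact argument values.
        verify(gateway).charge("customer-1", new BigDecimal("110.00"));
        // No verify(rates) here: asserting on the stub would over-spec the test.
    }
}
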
What do you think? How do you make your tests insensitive to structural changes?

Written by isaaclevin

October 19, 2019 at 11:10 am

Posted in Java, Testing