Business requirements are constantly evolving. We know this well and we try hard to take this into account while implementing projects for our customers. But hey, these are just words, what do they mean in real life? My aim in this post is to show you how we implemented a feature in a way that made it genuinely agile and ready for further extensions and future changes.
For the last year and a half, I’ve been working for our Danish customer, helping them build and improve their main product for the pharmaceutical market. Recently, they came to us asking for a transactional email solution, integrated with their apps. The goal was to enable the sales representatives, who are the main users of the platform, to send email follow-ups to their clients. At first glance, it seemed straightforward, but, as usual, there was a ‘but’. In this case, the feature was meant to be a closed-loop marketing tool and therefore we had to deliver the email tracking data to the reporting facility: the data warehouse.

Choosing a long-term, agile solution

After thorough research, we decided to go with Sparkpost, as we found them to be one of the few transactional email service providers who are able to push tracking events via webhooks to our endpoints. We all agreed that such a solution was better than cron-based data polling. No surprise there!

So, back to the core. The webhooks sent by Sparkpost are JSON requests received by a Spring Boot-based microservice, which in turn transforms them into our internal event format and pushes them onto a RabbitMQ-based event queue. Nothing too fancy, you might say. The customer’s spec stated that only two types of events should be reported (the dispatch step is sketched just after the list):

• When the recipient opens the message.
• When the recipient clicks a link in the message / downloads the content via a link.
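For illustration, the transform-and-dispatch step might look roughly like the sketch below. Note that InternalEvent, EventType, the enum constants and the exchange/routing-key names are all assumptions made for this example; only SparkpostRequest comes from the actual code.


import java.util.EnumSet;
import java.util.Set;

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

// A rough sketch only: InternalEvent, EventType and the exchange/routing-key
// names are made up for illustration; SparkpostRequest is the mapped webhook type.
@Service
public class EventDispatcher {

    private static final Set<EventType> REPORTED_TYPES =
            EnumSet.of(EventType.OPEN, EventType.CLICK); // hypothetical enum constants

    private final RabbitTemplate rabbitTemplate;

    public EventDispatcher(final RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void dispatch(final SparkpostRequest request) {
        final InternalEvent event = InternalEvent.from(request); // map to our internal format
        if (REPORTED_TYPES.contains(event.getType())) {
            rabbitTemplate.convertAndSend("events-exchange", "email.tracking", event);
        }
    }
}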

As every webhook batch dispatched by Sparkpost carries a unique GUID header, we used Redis as the event storage engine – incoming batches are simply saved to Redis under that GUID as the key.

However, bearing in mind that business requirements are constantly evolving, we decided to think ahead: we subscribe to all event types, store them all, and dispatch only the two selected types to the internal event queue. Thanks to this approach, our customer can take a more agile approach to their business – if, in the future, it turns out that other event types would bring added value to the product, we can always reanalyze the ones stored in the past and push them to the data warehouse, as sketched below.
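A replay job for such a reanalysis could be as simple as the following sketch, assuming payloads are stored as raw JSON strings (as in the filter-based approach shown later in this post) and reusing the hypothetical EventDispatcher from the previous sketch; the "*" key pattern is for illustration only, a real job would scan keys incrementally.


import java.io.IOException;
import java.util.List;

import org.springframework.data.redis.core.RedisTemplate;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

// A minimal replay sketch: reload stored payloads and re-dispatch any
// newly relevant event types to the internal queue.
public class EventReplayer {

    public void replayStoredEvents(final RedisTemplate<String, String> redisTemplate,
                                   final ObjectMapper objectMapper,
                                   final EventDispatcher dispatcher) throws IOException {
        for (final String key : redisTemplate.keys("*")) {
            final String payload = redisTemplate.opsForValue().get(key);
            final List<SparkpostRequest> requests = objectMapper.readValue(
                    payload, new TypeReference<List<SparkpostRequest>>() {});
            requests.forEach(dispatcher::dispatch);
        }
    }
}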

A straightforward, controller-based approach

The simplest approach to implement this incoming request storage would be to add logic to the controller. The controller itself accepts a collection of SparkpostRequest objects.


import java.util.List;

import org.springframework.data.redis.core.ValueOperations;
import org.springframework.web.bind.annotation.*;

@RestController
public class EventController {

    private final ValueOperations<String, Object> redis;

    public EventController(final ValueOperations<String, Object> redis) {
        this.redis = redis;
    }

    // @RequestHeader (not the messaging @Header) binds an HTTP header in Spring MVC
    @RequestMapping(path = "/events", method = RequestMethod.POST)
    public void handleEvent(@RequestHeader("X-MessageSystems-Batch-ID") final String guid,
                            @RequestBody final List<SparkpostRequest> sparkpostRequests) {

        // store the raw batch under its GUID before further processing
        redis.set(guid, sparkpostRequests);

        /* ... */
    }
}

This should work fine and get you going fast. If such stale data is of no value to your product, you could add a TTL of, for example, one year when writing to Redis (see the ValueOperations javadoc). In our case, storage size was not an issue and we didn’t set these keys to expire.
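For reference, the timeout overload of ValueOperations.set would look like this in the controller above (it additionally needs import java.util.concurrent.TimeUnit):


// expire stored batches after roughly one year, if stale data has no value
redis.set(guid, sparkpostRequests, 365, TimeUnit.DAYS);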

However, what happens if you have an error in your Jackson JSON mapping that isn’t discovered during testing? What happens if Sparkpost adds a new event type that you haven’t mapped in your EventType enum yet? You’ll lose data, of course. One way around this is reliable monitoring that detects such errors in the logs; you might then manage to re-request the events via Sparkpost’s Message Events API before they are erased after 10 days… Fortunately, there are better and safer alternatives.
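One partial mitigation we can sketch here (our assumption, not part of the original setup) is to make Jackson tolerate unknown enum values instead of rejecting the whole batch, assuming Spring Boot’s Jackson auto-configuration is in use. This softens the data-loss risk, but does not remove it.


import org.springframework.boot.autoconfigure.jackson.Jackson2ObjectMapperBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.fasterxml.jackson.databind.DeserializationFeature;

@Configuration
public class JacksonConfiguration {

    // Map unknown EventType values to null instead of failing deserialization
    @Bean
    public Jackson2ObjectMapperBuilderCustomizer tolerantEnumsCustomizer() {
        return builder -> builder.featuresToEnable(
                DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_AS_NULL);
    }
}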

Injecting the logic into the request pipeline earlier

Instead of putting the event-storing logic into the controller, where the request body has already been unmarshalled into objects by @RequestBody, we should inject our mechanism at an earlier stage of the request pipeline.

We’ve found the following points where we can inject our logic before the controller:

  • Create a Servlet filter by implementing the doFilter method, which accepts an HttpServletRequest that you can read the headers from directly, as well as the request body’s InputStream. However, in order to keep the request body stream available for the controller (the standard stream can be read only once), we needed to wrap the request in a multi-readable request and pass that down the filter chain.
  • Implement a Spring-based HandlerInterceptor. Here, you can also access the HttpServletRequest and read the headers and the request’s InputStream. Unfortunately, we didn’t find any way to read the request body while still allowing the controller to read it a second time independently, as the modified request object couldn’t be passed further down the pipeline. Interceptors just won’t do the job.
  • Implement a RequestBodyAdvice. Here, reading the request body is an easy task, but we have yet to find a way to read request headers. Let us know if you have any ideas!

Filter-based approach

As you might imagine, we ended up with the Servlet filter-based approach. More specifically, we used a OncePerRequestFilter, but using a regular filter is quite similar. Take a look:


import java.io.IOException;
import java.nio.charset.StandardCharsets;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.ValueOperations;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.ContentCachingRequestWrapper;
import org.springframework.web.util.WebUtils;

@Configuration
public class RedisFilterConfiguration {

    private static final String SPARKPOST_BATCH_ID_HTTP_HEADER = "X-MessageSystems-Batch-ID";

    private final ValueOperations<String, String> redisValueOps;

    public RedisFilterConfiguration(final ValueOperations<String, String> redisValueOps) {
        this.redisValueOps = redisValueOps;
    }

    @Bean
    public Filter redisOncePerRequestFilter() {
        return new OncePerRequestFilter() {
            @Override
            protected void doFilterInternal(final HttpServletRequest request, final HttpServletResponse response,
                                            final FilterChain filterChain) throws ServletException, IOException {
                // wrap the request so its body can be read again after the controller has consumed it
                final ContentCachingRequestWrapper requestWrapper = new ContentCachingRequestWrapper(request);
                final String batchId = requestWrapper.getHeader(SPARKPOST_BATCH_ID_HTTP_HEADER);

                try {
                    filterChain.doFilter(requestWrapper, response);
                } finally {
                    // by now the wrapper has cached whatever the controller read from the body
                    final ContentCachingRequestWrapper wrapperAfterRequest =
                            WebUtils.getNativeRequest(requestWrapper, ContentCachingRequestWrapper.class);
                    // fall back to UTF-8 if the request declares no character encoding
                    final String encoding = wrapperAfterRequest.getCharacterEncoding() != null
                            ? wrapperAfterRequest.getCharacterEncoding()
                            : StandardCharsets.UTF_8.name();
                    final String payload = new String(wrapperAfterRequest.getContentAsByteArray(), encoding);
                    if (batchId != null) { // skip requests that lack the Sparkpost batch header
                        redisValueOps.set(batchId, payload);
                    }
                }
            }
        };
    }
}

It’s a complete, ready-to-use @Configuration class. Here’s a short explanation of the snippet above:

  • the doFilterInternal method wraps the request in a ContentCachingRequestWrapper, which means the body can be read more than once,
  • we extract the Sparkpost batch ID from the header,
  • we pass the request down the filter chain (where it is converted to a List<SparkpostRequest> and fed to the SparkpostController),
  • back on top of the stack, in the finally block, we extract the cached request body from the wrapper and store it in Redis – and that’s it!
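One refinement worth considering (our assumption, not part of the original configuration): as written, the filter runs for every request, so you may want to register it only for the webhook endpoint, for example via a FilterRegistrationBean. In Spring Boot, wrapping the filter bean in a registration like this should also prevent it from being auto-registered for all URLs.


import javax.servlet.Filter;

import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RedisFilterRegistration {

    // Scope the filter to the webhook endpoint so unrelated request bodies
    // are not written to Redis; Spring injects the filter bean defined above
    @Bean
    public FilterRegistrationBean redisFilterRegistration(final Filter redisOncePerRequestFilter) {
        final FilterRegistrationBean registration = new FilterRegistrationBean(redisOncePerRequestFilter);
        registration.addUrlPatterns("/events");
        return registration;
    }
}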

I hope you enjoyed our approach to the problem of analyzing external data while taking evolving business requirements into account – in this case, without any data loss. When new requirements do show up, we will still be able to reprocess events which, under the first set of requirements, were ignored (e.g. message bounces) or not fully parsed (e.g. IP geolocation data).

Check out a demo app showing how to store a request payload in Redis using Spring MVC.

All feedback is more than welcome. Don’t hesitate to drop us a comment here or contact us on Facebook or Twitter, via either our personal or company profiles.
