Event-Driven Frameworks Using Camel and Drools

In enterprise application development, complex system-to-system integrations are often a central element of the architecture. Companies have traditionally handled data exchange with partners through large bulk batch cycles, which means integrated systems sync only a few times a day. Over the past several years, however, many organizations have begun looking to operate in near real time. As user expectations continue to evolve, large enterprises that could get away with a 24-hour cycle two years ago are finding it harder and harder to maintain that position.

One way companies like Benefitfocus have met this challenge is to build a real-time data exchange framework around a publish/subscribe transaction model. This can simplify the integration process and get large, complex projects to market more quickly than traditional methods. Another benefit of this model is a higher degree of standard interface adoption: presenting a standard API with connectivity protocol specifications makes it easier for trading partners to develop the appropriate transactions.

The core of this architecture is the Apache Camel integration framework. At the most basic level, Camel both produces and consumes JMS messages for asynchronous processing. Messages are consumed and then "routed" to custom handler implementations. The handler is instantiated based on a message header (or a configured default) and is passed the body of the message as a string along with a map of the JMS headers.
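A minimal sketch of that pattern is shown below. It assumes a JMS component named "jms" is registered with Camel and a connection factory; the queue name, the "EventHandler" header key, the default handler name, and the handler contract are illustrative, not the production configuration.

```java
import java.util.Map;

import org.apache.camel.Headers;
import org.apache.camel.builder.RouteBuilder;

// Consume from a queue and dispatch to a handler bean named in a message header.
public class EventRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("jms:queue:inbound.events")
            .routeId("inbound-event-route")
            // Resolve the handler bean name from a header, falling back to a default.
            .process(exchange -> {
                String handler = exchange.getIn()
                        .getHeader("EventHandler", "defaultHandler", String.class);
                exchange.getIn().setHeader("resolvedHandler", handler);
            })
            // Dispatch dynamically to the resolved handler bean in the registry.
            .toD("bean:${header.resolvedHandler}?method=handle");
    }
}

// A handler contract matching the description: the body as a String plus the JMS headers.
interface EventHandler {
    void handle(String body, @Headers Map<String, Object> headers);
}
```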

Camel is a good choice because, out of the box, it provides robust error handling, transactions and priority consumption, all configured through Spring beans and Camel routes. Camel also exposes a wide range of options via the endpoint connection URL.
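The sketch below illustrates the kind of configuration this refers to: redelivery and dead-lettering declared on the route, with further tuning through endpoint URI options. The queue names, retry counts and concurrency values are examples, not the settings used in production.

```java
import org.apache.camel.builder.RouteBuilder;

// Error handling and transacted consumption declared entirely in route configuration.
public class ResilientRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Failed exchanges are retried with backoff, then parked on a dead-letter queue.
        errorHandler(deadLetterChannel("jms:queue:inbound.events.dlq")
                .maximumRedeliveries(3)
                .redeliveryDelay(5000)
                .useExponentialBackOff());

        // Endpoint options on the URI: transacted consumption and consumer concurrency.
        // Additional behavior (selectors, prefetch, etc.) is tuned the same way.
        from("jms:queue:inbound.events"
                + "?transacted=true"
                + "&concurrentConsumers=5")
            .to("bean:defaultHandler?method=handle");
    }
}
```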

Another important element of the architecture is the Drools rules engine. Within the routing flow, decisions must be made per trading partner about which handlers to execute and which custom business logic to apply for a particular transaction type. The transaction's message header contains the message type, platform tenant and other key elements that establish context. The transaction is then passed through the rules engine, which calculates custom routes based on the content of the header.
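One way to wire this in is sketched below: the header metadata is inserted into a Drools session as a fact, and the rules populate the routing decision. The fact class, the use of the default classpath session, and the idea that rules set a single resolved handler are assumptions; the actual rules would live in DRL files maintained per trading partner.

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Derive a routing decision from transaction metadata using Drools.
public class RouteDecisionService {

    // Fact carrying the header metadata plus the decision computed by the rules.
    public static class TransactionContext {
        private final String messageType;
        private final String tenant;
        private String resolvedHandler;   // set by the rules

        public TransactionContext(String messageType, String tenant) {
            this.messageType = messageType;
            this.tenant = tenant;
        }
        public String getMessageType() { return messageType; }
        public String getTenant() { return tenant; }
        public String getResolvedHandler() { return resolvedHandler; }
        public void setResolvedHandler(String resolvedHandler) { this.resolvedHandler = resolvedHandler; }
    }

    private final KieContainer container =
            KieServices.Factory.get().getKieClasspathContainer();

    public String resolveHandler(String messageType, String tenant) {
        KieSession session = container.newKieSession();
        try {
            TransactionContext ctx = new TransactionContext(messageType, tenant);
            session.insert(ctx);
            session.fireAllRules();   // rules evaluate the metadata and set the handler
            return ctx.getResolvedHandler();
        } finally {
            session.dispose();
        }
    }
}
```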

The message body carries the event's payload, usually XML, along with the more specific context for that event. Based on the metadata in the header, the framework determines the appropriate event handler. The handler is specific to the type of event that occurred and selects the process or process flow that performs the actual event processing.
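A possible handler following this pattern is sketched below: it receives the XML body and the JMS headers, inspects both, and hands off to whichever process applies. The class name, header key and element usage are hypothetical.

```java
import java.io.StringReader;
import java.util.Map;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Handles one event type: parses the XML payload and kicks off the matching process.
public class EnrollmentEventHandler {

    public void handle(String body, Map<String, Object> headers) {
        try {
            Document payload = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(body)));

            String tenant = String.valueOf(headers.get("TenantId"));
            String eventType = payload.getDocumentElement().getTagName();

            // Select the concrete process or process flow for this tenant and event type.
            startProcessFor(tenant, eventType, payload);
        } catch (Exception e) {
            throw new RuntimeException("Failed to handle event payload", e);
        }
    }

    private void startProcessFor(String tenant, String eventType, Document payload) {
        // Placeholder for kicking off the actual business process.
    }
}
```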

At Benefitfocus, we’ve found a rules-driven event framework that sits on top of a metadata paradigm to be very versatile and compelling. You can define transaction routes, services and endpoints per tenant simply by modifying metadata, ultimately providing a lighter-weight path to go-live than traditional ETL methods.