Secure Transaction Service (IV) Know The Rules!

Business rules are logical statements that define the behavior and operation of a business, for example: "if a user cancels their subscription, send them an e-mail." These rules may be written in process documents or embedded in applications. Business Rules Engines (BREs), however, are a different animal in the software world. According to Wikipedia:

A business rules engine is a software system that executes one or more business rules in a runtime production environment. The rules might come from a legal regulation (“An employee can be fired for any reason or no reason but not for an illegal reason”), a company policy (“All customers that spend more than $100 at one time will receive a 10% discount”), or other sources. A business rule system enables these company policies and other operational decisions to be defined, tested, executed and maintained separately from application code. Rule engines typically support rules, facts, priority (score), mutual exclusion, preconditions, and other functions.

Traditionally, BREs were included in software systems either as an integrated module or as a separate piece. It was not unusual to misunderstand the naturally decoupled position a BRE should occupy in an architecture (even when serving several types of tenants). The usual result was a BRE integrated into a monolith as a module or as a tightly coupled component.

By integrating BREs into monoliths, all kinds of flexibility and versatility were lost. Unfortunate developments in the past (e.g. older versions of JBoss Drools) reinforced this approach. Fortunately, this mistake was corrected some time ago.

BREs in Distributed Cloud-Based Systems?

The Central Fexco API is a cloud-native, distributed and decoupled system, and integrating a BRE was genuinely challenging precisely because of the extremely distributed nature of the architecture.

On the other hand, there are good BREs on the market, and more and more features push us towards Business Rules Management Systems (BRMSs). Recent BRMSs are usually expensive and constitute a complete, autonomous sub-system covering rules administration, rules design and so on (that's why they are called Management Systems!). A BRMS is used to develop, arrange, store, edit and execute business rules, acting as a central repository for them. Decision owners and IT employees can collaborate to develop, version, and edit rules in a single-sourced environment. A BRMS should help businesses automate tasks, improve consistency and enhance the way business policies change.

In our view, BRMSs are expensive, over-sized and under-used components in most systems. Besides, they are usually complex and difficult to integrate. Honestly, we could not think of a single scenario in which the required investment was worthwhile. Our usage of rules was fairly standard, which meant that most of the functions in a BRMS would never be used. This consideration made us start thinking about elaborating our own solution in the Secure Transaction Service (STS) working in the Fexco Central API.

The STS Rules Processor

The STS Rules Processor (STS-RP) is composed of two main applications, both extremely light: a really simple UI app and a microservice running in HA mode, sharing a simple rules structure. The rules structure is entirely based on JSON, which allows very fast storage and handling.

The good thing is that everything is simple and extensible. Administration is simple as well, since we are handling JSON. The STS-RP is completely asynchronous and non-blocking: communications are based on queues, with topics used to broadcast results.

On the other hand, this is a distributed system and we cannot force any component to change its behavior. The STS-RP simply listens, processes the rules as appropriate and returns a verdict from the rules evaluation, logging the whole process and the result. How the verdict is then handled by the business logic components is up to them.

Let's have a look at the details.

A Tool for a Domain Specific Language

The core component of the STS-RP is based on ANTLR. Developed by Terence Parr (professor of Computer Science at the University of San Francisco), ANTLR, or ANother Tool for Language Recognition, is a solid option for developing an interpreted language or a DSL (Domain-Specific Language). The difference between a general-purpose interpreted programming language and a DSL is clear: a DSL is a tool for programming solutions to a specific, narrow set of problems. It is more specialized than a general-purpose programming language, which can be used to program solutions for many kinds of problems.

The process for creating a DSL is:

  1. Have a problem
  2. Decompose the problem into smaller parts to be solved, and solve it
  3. Have a bunch more similar problems
  4. Decompose those, too, and solve them
  5. Realize that many of the problems have similar decomposition – there are things you’re doing over and over again
  6. Decide to think of the common parts of the problem decomposition as “primitives” of a class of problems
  7. Write a tool to help you manipulate those primitives

Fair enough. Nevertheless, DSLs are usually built using runtime reflection (not familiar with this concept? The Spring framework is heavily based on it!).

Compile-time reflection is a powerful way to develop program transformers and generators, while runtime reflection is typically used to adapt the language semantics or to support very late binding between software components.

The problem with runtime reflection is that it hurts both performance and the ability to change (flexibility). A reflection-based DSL needs a collection of functions and classes that correspond to the terms of a vocabulary or lexer. Then, for each string matching a term in the lexer, an object is instantiated from the corresponding class and a function is invoked by name. The result is not good, as all of this resolution happens at runtime and drags down the general performance of the app.
ANTLR4 is based on a parsing strategy that differs from runtime reflection. This, together with the tool's flexibility and extensibility, is what made us adopt it as the solution for the STS-RP.
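To make the reflection problem concrete, here is a minimal, purely illustrative sketch (the names are ours, not from the STS-RP) of how a reflection-based dispatcher resolves vocabulary terms to classes and methods by string lookup at runtime:

```java
import java.lang.reflect.Method;
import java.util.Map;

// Hypothetical sketch of a reflection-based term dispatcher:
// each vocabulary term maps to a class name, and the handler is
// instantiated and invoked by string lookup at runtime.
public class ReflectiveDispatcher {

    // One handler class per term in the vocabulary.
    public static class IsCashPayment {
        public boolean evaluate(String paymentType) {
            return "CASH".equals(paymentType);
        }
    }

    private static final Map<String, String> VOCABULARY =
            Map.of("isCashPayment",
                   ReflectiveDispatcher.class.getName() + "$IsCashPayment");

    public static boolean dispatch(String term, String paymentType) {
        try {
            // Resolve class and method purely by name: slow and fragile,
            // since every new vocabulary term means a new class and a redeploy.
            Class<?> handler = Class.forName(VOCABULARY.get(term));
            Object instance = handler.getDeclaredConstructor().newInstance();
            Method m = handler.getMethod("evaluate", String.class);
            return (boolean) m.invoke(instance, paymentType);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Unknown term: " + term, e);
        }
    }
}
```

Every evaluation pays the cost of `Class.forName`, instantiation and `Method.invoke`, and extending the vocabulary means writing and deploying new classes, which is exactly the inflexibility described above.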

The parse tree as it is depicted in the Bible

ANTLR parsers use a parsing technology called Adaptive LL(*), or ALL(*). The idea is to perform grammar analysis dynamically at runtime rather than statically. ALL(*) parsers are pre-generated and have access to the input sequences before the parser executes, so they can recognize sequences of characters by weaving through a custom grammar. The point is that you can write a custom grammar without needing to adapt it to the parsing strategy.

We won't go deeper into the details of how ANTLR works. You can find a full description and examples of ANTLR in this awesome book. Definitely recommended!

The Structure of Rules

The organization of the entities in the STS-RP includes some new components needed to make it more versatile and usable.

Rulebook: Simply a rule set. The name of the rulebook is used across the Central API as the key that transactions and other business logic use to look up the rules to meet.

Rule: The specific rule to meet, described in three main sections: name, when, then. Rules do not contain discrete values, only placeholders to be populated with the data in the specs.

Specification (Spec): Each rulebook contains a set of specs with parameters and information for the placeholders in the rules. Specs are modified through the STS-RP Manager app.

UISet: A collection of UI component names affected by rules whose evaluation implies a modification of the UI.

Our little rule entities structure
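To give an idea of what these entities look like on the wire, a rulebook serialized as JSON might be structured as below. The field names are illustrative assumptions of ours, not the actual STS-RP schema; note how the rule carries only placeholders while the spec supplies the discrete values:

```json
{
  "rulebook": "retail_compliance",
  "rules": [
    {
      "name": "Customer_Details_Required_Cash",
      "when": "isBuyOrder AND isCashPayment AND baseAmount > {threshold}",
      "then": "required {uiset}"
    }
  ],
  "specs": [
    {
      "rule": "Customer_Details_Required_Cash",
      "threshold": 199.99,
      "uiset": "customerStandard"
    }
  ],
  "uisets": [
    {
      "name": "customerStandard",
      "components": ["customerName", "customerAddress"]
    }
  ]
}
```

Because the structure is plain JSON, changing a threshold or a UI component list is an edit in the management app, not a code change.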

The DSL Vocabulary and Grammar

ANTLR needs two components to work:

First, the Lexer, or vocabulary: a collection of the terms of the grammar and the expressions exactly as they will be found in the text to be parsed. For instance, we can define simple terms for our conditions:

ISPREORDER: 'isPreOrder';
ISBUYORDER: 'isBuyOrder';
ISSELLORDER: 'isSellOrder';
ISANYORDER: 'isAnyOrder';
ISNEWCUSTOMER: 'isNewCustomer';
ISCASHPAYMENT: 'isCashPayment';
ISCARDPAYMENT: 'isCardPayment';

Secondly, the Parser: an arrangement of the terms in the lexer/vocabulary, forming the relationships and sentences that will be found in the expressions to be evaluated. Yes, you're right, this is basically a GRAMMAR: we are telling the app what the structure of the language is. For instance:

parser grammar RulesParser;
options { tokenVocab=RulesLexer; }

operator: (BOOLEAN | GREATERTHAN | SMALLERTHAN | EQUALS);
bop: (AND | OR);
timeop: (IN);
op: (EQ | GT | LT | GTE | NEQ);
amount: (BASEAMOUNT | FXAMOUNT | HISTORICALAMOUNT);
condition: (ISPREORDER | ISBUYORDER | ISSELLORDER | ISANYORDER | ISNEWCUSTOMER | ISCASHPAYMENT | ISCARDPAYMENT);
uiset: (CUSTOMERSTANDARD | AUTHORIZATION | TRANSACTIONCOMMENT | CARDDETAILS | PURPOSEOFFUNDS | SOURCEOFFUNDS | BENEFICIALOWNER);

arithoperation: bop amount op DOUBLE;
timearithoperation: bop amount timeop INTEGER op DOUBLE;

thengroup: ADJECTIVE uiset;

booleanconditiongroup: bop condition;
conditiongroup: condition booleanconditiongroup* arithoperation* timearithoperation*;

name_rule: RULE_NAME_HEADER RULE_NAME;
when_rule: RULE_WHEN_HEADER conditiongroup;
then_rule: RULE_THEN_HEADER thengroup*;

compliancerule: name_rule when_rule then_rule;

The third step is to generate the classes needed to parse expressions using this vocabulary. If you're using the IntelliJ ANTLR4 plugin, you'll find the command for generating the classes when you right-click on the g4 files.

The ANTLR4 IntelliJ plugin options make our lives easier.

The result will be a set of new classes arranged according to the g4 parser file.

The generated ANTLR4 classes. It’s better to have a specific package for this.

As you can see, the Lexer and Parser files have the g4 file extension, and several classes are generated from them. If you use IntelliJ as your IDE, the ANTLR4 plugin works really well.
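Wiring the generated classes together follows the standard ANTLR4 runtime pattern. The sketch below assumes the generated RulesLexer and RulesParser classes plus the ANTLR runtime are on the classpath, so it is shown for illustration only:

```java
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

// Feed a rule expression through the generated lexer and parser.
String rule = "NAME Customer_Details_Required_Cash "
        + "WHEN isBuyOrder AND isCashPayment AND baseAmount > 199.99 "
        + "THEN required customerStandard";

RulesLexer lexer = new RulesLexer(CharStreams.fromString(rule));
CommonTokenStream tokens = new CommonTokenStream(lexer);
RulesParser parser = new RulesParser(tokens);

// Each grammar rule becomes a method on the generated parser;
// compliancerule() returns the root of the parse tree.
ParseTree tree = parser.compliancerule();
```

From the resulting tree, a listener or visitor (also generated by ANTLR) walks the name, when and then sections.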

Parsing Rules

Once you've got the ANTLR4 classes, the next step is to implement the logic behind the expressions and terms in the vocabulary. The structure of a rule is really simple, composed of three main sections: Name (an identifier that allows the rule to be matched to specific circumstances), When (the set of conditions to be met) and Then (the actions to take). For instance:

NAME Customer_Details_Required_Cash WHEN isBuyOrder AND isCashPayment AND baseAmount > 199.99 THEN required customerStandard

And yes, this is a rule. Really easy, right? The parser does the magic and we are in charge of evaluating the conditions and results.
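To illustrate that evaluation side (with hypothetical types of our own, independent of the generated ANTLR classes), the example rule above boils down to something like this once the parse tree has been walked:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the evaluation behind the example rule:
//   WHEN isBuyOrder AND isCashPayment AND baseAmount > 199.99
//   THEN required customerStandard
// Hypothetical types; the real STS-RP derives this from the parse tree.
public class RuleEvaluation {

    public record Transaction(boolean buyOrder,
                              boolean cashPayment,
                              double baseAmount) {}

    // Evaluate the WHEN conditions; return the THEN actions if they all hold.
    public static List<String> evaluate(Transaction tx) {
        List<String> actions = new ArrayList<>();
        boolean matches = tx.buyOrder()
                && tx.cashPayment()
                && tx.baseAmount() > 199.99;
        if (matches) {
            actions.add("required customerStandard");
        }
        return actions;
    }
}
```

The returned actions are what the STS-RP reports in its verdict; the subscribed components decide what to do with them.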

The Rules Processor in the System

The STS-RP is completely asynchronous, being fed through a specific queue and broadcasting the result of the evaluation through a topic. The Central API component sends a message to the STS-RP queue containing a standard event batch. That's all. No special messages, content or formats are needed. The STS-RP interprets the content in the event batch and applies the right rulebook according to the information in each event and the type of event in the batch. Once the evaluation is done, the result is sent through the output topic, where the subscribed components listen and act accordingly. The correlation ID of each event batch is the main criterion the topic subscribers use to filter messages.
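That flow can be sketched in-memory with standard queues standing in for the real broker (all names here are illustrative, not the actual STS-RP API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In-memory sketch of the STS-RP message flow: an input queue feeds
// event batches to the processor, verdicts are broadcast on an output
// "topic", and subscribers filter by correlation ID.
// LinkedBlockingQueue stands in for the real queue/topic infrastructure.
public class RulesProcessorFlow {

    public record EventBatch(String correlationId, double baseAmount) {}
    public record Verdict(String correlationId, boolean passed) {}

    private final BlockingQueue<EventBatch> inputQueue = new LinkedBlockingQueue<>();
    private final BlockingQueue<Verdict> outputTopic = new LinkedBlockingQueue<>();

    public void send(EventBatch batch) { inputQueue.add(batch); }

    // Process one batch: evaluate the rules and broadcast the verdict.
    public void processOne() {
        EventBatch batch = inputQueue.poll();
        if (batch == null) return;
        boolean passed = batch.baseAmount() <= 199.99; // illustrative rule
        outputTopic.add(new Verdict(batch.correlationId(), passed));
    }

    // A subscriber keeps only the verdicts matching its correlation ID.
    public Verdict awaitVerdict(String correlationId) {
        Verdict v;
        while ((v = outputTopic.poll()) != null) {
            if (v.correlationId().equals(correlationId)) return v;
        }
        return null;
    }
}
```

The important property is that the sender never blocks on the evaluation: it publishes the batch and later picks its own verdict out of the broadcast by correlation ID.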

The Secure Transaction Service Rules Processor. It acts as a lightweight BRMS being totally flexible and extensible regarding the set of rules and their configuration.

Conclusions

  • Using a commercial BRMS does not make a lot of sense if you do not need the dozens of features you will never use.
  • By using ANTLR4 we can define a custom grammar adapted to the standard rules syntax (name, when, then).
  • We get much better performance and extensibility with the parser approach than with plain runtime reflection. In this case, ANTLR4 is an excellent tool.
  • A proper entity hierarchy is needed to put the information in the right place. Our objective is to minimize the scope of changes by using placeholders and dependent entities that can be modified in the management app.
  • This approach lets us build the management app really fast, as we only need to manage JSON structures.
  • Data stores such as Cosmos DB are ideal for this kind of data structure. The performance metrics are amazing.
  • We still have to implement the evaluation and interpretation of the rules to notify the rules clients of the results. This task is much easier once the parsing problem is solved. Again, ANTLR4 has played a very important role.

I know you’re thinking: “Wow! I can make my own interpreted programming language with ANTLR!”. Stop thinking like that. Nobody needs a new interpreted language.

Cheers!

Jesus de Diego


Software Architect and Team Lead

2 Replies to “Secure Transaction Service (IV) Know The Rules!”

  1. Excellent article, well explained, looking forward to follow-on articles showing how business processes have been built that use this model.

    1. The key change here is the nature of the architecture, moving from monoliths, where everything happens inside, to distributed models (MSA, for instance). Tightly coupled rules engines are not usable anymore and we have to apply rules-based validations in a different way, as external components.
      The second issue was flexibility. The former process to create new rules or modify existing ones was painful, and copy/paste was everywhere, with terrible results. By using simple JSON structures we can manage (create, modify, delete, disable, enable) the rule sets in a really easy way with immediate effect.
      Finally, the flexibility and extensibility of the rule sets is based on the dictionary. We've found in ANTLR an excellent tool to create and maintain dictionaries and grammars that we can manage. Of course, the implementation of the validation for new or modified terms is still needed, but we expect to talk about automatic code generation in a new post, coming soon!
      Cheers!
