FEMA is seeking an integrated, scalable, near-real-time, cost-efficient solution that provides an adaptive risk assessment and risk mitigation strategy to identify, design, and implement the controls necessary to prevent improper payments resulting from fraud, waste, and abuse within Government disaster assistance programs.
Determine whether this application for disaster assistance is potentially fraudulent.
Some of the rules are:
Social Security Number (SSN) is Valid
Social Security Number is a Multi-instance Number
Duplicate Social Security Number for One Disaster
Social Security Number Belongs to a Minor
Duplicate Damaged Dwelling Address for One Disaster
Non-verifiable Damaged Dwelling Address
Verified Address Does Not Match
Damaged Dwelling Address Zip Code not in Declared Disaster Area
Current Address Belongs to an Institution
Duplicate Current Mailing Address
Current Mailing Address outside Declared Disaster State(s)
Duplicate E-mail Address
Duplicate Electronic Funds Transfer (EFT) Account
Different Applications with Same Dependent(s)
Valid Dependent(s) in Household
Current Mailing Address is in Care of Applicant
No Home Insurance
In the original document, the first rule was written like this:
Social Security Number (SSN) is Valid. The SSN specified in the application for disaster assistance is validated against data from the Social Security Administration using the applicant’s first name, last name, damaged dwelling address, damaged dwelling phone number, and date of birth.
This is not a rule; it’s a procedure, and it doesn’t actually state the conditions under which the SSN is valid.
So we need to convert it into a declarative rule format such as this:
If the applicant’s SSN is NOT in the SSA database then issue a warning
If the applicant’s last name does not match the name in the SSA database then issue a warning
If the applicant’s date of birth does not match the SSA date of birth then issue a warning
If the SSN is missing then issue a warning
If the format of the SSN is incorrect then issue a warning
So you can see the original statement actually corresponds to more than one rule.
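As a rough illustration of the declarative style (outside Corticon, with hypothetical field names and record shapes), those five rule statements could be sketched in Python like this:

```python
import re

def ssn_warnings(app, ssa_record):
    """Hypothetical stand-alone version of the SSN rule statements.

    `app` holds the application data; `ssa_record` is the matching SSA
    record, or None if the SSN was not found in the SSA database.
    """
    warnings = []
    ssn = app.get("ssn")
    if not ssn:
        warnings.append("SSN is missing")
    elif not re.fullmatch(r"\d{3}-\d{2}-\d{4}", ssn):
        warnings.append("SSN format is incorrect")
    elif ssa_record is None:
        warnings.append("SSN not found in SSA database")
    else:
        if app.get("lastName") != ssa_record.get("lastName"):
            warnings.append("Last name does not match SSA record")
        if app.get("dateOfBirth") != ssa_record.get("dateOfBirth"):
            warnings.append("Date of birth does not match SSA record")
    return warnings
```

Each check issues its own warning, just as each declarative rule statement fires independently.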
By referring to the rule statements we can deduce the existence of objects such as these:
Person – who is submitting an application
Property – the damaged dwelling for which assistance is being sought
Application – the request for assistance
Social Security Administration Record – details of the SSN for the applicant
Government Assistance Data – details of other submitted, pending or processed applications
These will be modeled in the Corticon vocabulary:
Figure 1 The Business Entities
An application will have exactly one requesting person specified
An application will have exactly one damage property specified
Figure 2 Specifying Associations Between Entities
The role played by the person in this association is that of applicant
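Outside Corticon, the entity model and its “exactly one” associations might be sketched as plain data classes (the class and field names here are illustrative, not the actual Corticon vocabulary):

```python
from dataclasses import dataclass

@dataclass
class Person:                  # the applicant
    firstName: str
    lastName: str
    ssn: str
    dateOfBirth: str

@dataclass
class Property:                # the damaged dwelling
    address: str
    zipCode: str

@dataclass
class Application:
    applicant: Person          # exactly one requesting person
    damagedProperty: Property  # exactly one damaged property
```

The association roles (applicant, damagedProperty) give the rules a way to navigate from an application to its related objects.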
Here are some of the properties for the status attribute of the Government Assistance Database
Figure 3 Specifying Possible Values for an Attribute
The entire vocabulary now looks like this in Corticon:
Figure 4 The Vocabulary
Notice how the Person and the Property that are associated with the Application are shown as associations nested under the Application entity.
Sample test data can be created in the Corticon Test tab:
Figure 5 Sample Test Data
In this test data we are simulating the existence of a database for the SSA data and the GAD data.
In later stages of development we will connect the vocabulary to a real database, but for now we can test our rules without all the complications of database access.
A database can be generated automatically from the vocabulary in Corticon Studio:
First the database properties must be entered:
Figure 6 Database Properties
This is for HSQL (which ships with Corticon).
Next the business objects which are to be stored in the database must be marked as persistent:
Figure 7 Adding Key Attributes for Database Access
Also the attributes to be used to provide a unique key must be identified
This will create the tables in the fema database in HSQL
Figure 8 Creating a Database Automatically from Studio
Here is the SSADATABASE table
Figure 9 The Database Tables
The rows currently in the table are
Figure 10 Rows in the SSA Database
If the database already exists and the table names match the business object names and the column names match the attribute names, then Corticon will do the mapping automatically. Otherwise you will need to map the vocabulary to the tables manually.
There are no hard and fast rules about how to divide the rules into groups, but for validation problems a good way to start is to group the rules according to the main attribute they validate.
In this example there are actually several sets of rules that apply to the social security number.
We could put these all in a single rulesheet, but since there are some differences we will create a number of rulesheets to check the SSN.
The rule groupings (from the original Word document) are:
So each group will become a rulesheet with tabs named as follows:
Figure 11 The Rulesheet Tabs
These are the rule statements that would be on the sheet SSNA
Figure 12 The Rule Statements
Now we are ready to model the rules.
Within this phase of rule modeling there are a number of steps
The SCOPE section of the rulesheet is a place where we can define the context for the rules on the rulesheet. Essentially we tell Corticon which of the business objects we wish it to use in the rules.
This is particularly important when the business objects contain references to other objects (associations)
Here’s what the scope section might look like:
Figure 13 The Rulesheet Scope
Line 1 refers to the Application and declares an ALIAS (called theApplication)
Line 2 refers to the applicant who is on the application
Line 3 refers to the property that is referred to on the application
The key here is that now theApplicant and theProperty have a common context
Then at line 4 we introduce another business object. This represents the Social Security Administration database that we need to refer to in order to validate the SSN, the name and the date of birth.
Since the SSA database may contain millions of records we need to filter that down to just the records that match the ssn on our application.
This is done in the PRECONDITIONS/FILTERS section:
Figure 14 The Rulesheet Filter
This is a lot like the where clause in a SQL query.
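To make the analogy concrete, here is a minimal Python sketch of the same filtering (the sample data and field names are invented); the equivalent SQL is shown in the comment:

```python
# The rulesheet filter behaves like:
#   SELECT * FROM SSADATABASE WHERE ssn = <applicant's ssn>
ssa_database = [
    {"ssn": "123-45-6789", "lastName": "Green"},
    {"ssn": "987-65-4321", "lastName": "Smith"},
]
application = {"ssn": "123-45-6789", "lastName": "Greem"}

# Only the SSA records matching the application's SSN reach the rules.
matching = [r for r in ssa_database if r["ssn"] == application["ssn"]]
```

The rules on the sheet then operate only on the filtered records, not on the millions of rows in the full table.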
In addition to checking that the SSN itself is valid, we will need to compare the name on the application with the name in the SSA database. If the names are identical then making the match is easy, but if the names differ slightly we will need some way to assess how close they are. For example, if the last name on the application is “Greem” but the last name in the SSA database is “Green”, do we consider this a close match, given that “n” and “m” are adjacent on the keyboard and this could have been a simple typo?
The error might also have arisen because “n” and “m” sound very similar.
We can make use of some special functions to help evaluate these possibilities.
First we can calculate something called the SOUNDEX value of the name – this generates a code based on the sound of the name rather than its spelling.
We can also assess whether the difference might be the result of a simple mis-keying. This uses a function called KEYBOARD SIMILARITY.
We can perform these calculations in a section called NON-CONDITIONAL RULES
Figure 15 Non Conditional Actions on a Rulesheet
So as an example the soundex value of “Green” is G65 and the soundex value of “Greem” is also G65.
So even though the names are spelled differently they have the same soundex value and could be considered to be the same.
However, if the two names were “Green” and “Treen” then the Soundex values would differ: “Treen” becomes T65. But “T” is directly above “G” on the keyboard and could represent a mis-keying, in which case we might want to accept that the names are the same.
Of course, if the name were “Smith” then we would conclude there is no match on the names.
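Corticon provides these comparisons as built-in operators; to show what they compute, here is a stand-alone Python sketch of American Soundex and a simple keyboard-neighbour check (the adjacency test is a simplification of whatever KEYBOARD SIMILARITY actually does):

```python
def soundex(name):
    """American Soundex: first letter plus up to three digit codes."""
    codes = {}
    for group, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                         ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in group:
            codes[ch] = digit
    name = name.lower()
    result = name[0].upper()
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            result += code
        if ch not in "hw":        # h and w do not separate repeated codes
            prev = code
    return (result + "000")[:4]   # pad to the standard four characters

KEYBOARD_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def _key_position(ch):
    for row, letters in enumerate(KEYBOARD_ROWS):
        col = letters.find(ch)
        if col != -1:
            return row, col
    return None

def possible_miskey(name1, name2):
    """True if the names differ in exactly one letter and the two
    letters are neighbours on a QWERTY keyboard."""
    if len(name1) != len(name2):
        return False
    diffs = [(a, b) for a, b in zip(name1.lower(), name2.lower()) if a != b]
    if len(diffs) != 1:
        return False
    pa, pb = _key_position(diffs[0][0]), _key_position(diffs[0][1])
    if pa is None or pb is None:
        return False
    return abs(pa[0] - pb[0]) <= 1 and abs(pa[1] - pb[1]) <= 1
```

Standard Soundex pads the code to four characters, so “Green” and “Greem” both become G650 (the document’s G65 is the same code without the padding), while “Treen” becomes T650.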
Here’s what the rules might look like
Figure 16 Decision Table Entries
Based on these rule statements
Figure 17 Rule Statements with Substitution Variables
Notice that we have customized the rule statements by adding references to the attributes. When the rules are executed these rule statements, along with the substituted values, will be part of the output from the rules.
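The substitution mechanism is essentially string templating; in Python terms the effect is similar to this (the template text is illustrative, not the actual rule statement):

```python
# A rule statement with substitution variables for two attributes.
statement = ("Last name {appName} on the application does not match "
             "{ssaName} in the SSA record")

# When the rule fires, the attribute values are substituted in.
message = statement.format(appName="Greem", ssaName="Green")
```

The resulting message, with the live attribute values filled in, is what appears in the rule output.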
The next step is to use the built in analysis tools to check if the rule model contains any logical errors.
There are three checks that can be performed automatically by Corticon
Figure 18 The Logical Analysis Buttons
In this case the ambiguity check shows that rulesheet SSNA contains no ambiguities:
Figure 19 Ambiguity Message
If ambiguities are detected then we can resolve them in one of the following ways:
Making the conditions more specific
Adding additional conditions that resolve the ambiguity
Noting that the ambiguity is acceptable – it may be appropriate for several rules to execute. E.g. data validation
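To see what an ambiguity is, consider two toy rules (not the FEMA rules) whose conditions can both be true for the same input:

```python
# Two overlapping rules: their conditions are not mutually exclusive.
rules = [
    ("is a minor",        lambda app: app["age"] < 18),
    ("is of working age", lambda app: app["age"] >= 16),
]

def fired(app):
    """Return the labels of every rule whose condition holds."""
    return [label for label, cond in rules if cond(app)]
```

An applicant aged 16 fires both rules. If the two rules assigned conflicting values to the same attribute the ambiguity would need resolving; if they merely emit independent findings, as in data validation, it may be acceptable.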
The completeness checker confirms that rulesheet SSNA has no missing rules:
Figure 20 Completeness Message
If there were missing rules then Corticon would automatically add the missing condition combinations; the business analyst would then need to determine the appropriate action to take for each of the added rules:
If the added rules are essential cases (i.e. they have a distinct and different rule statement even if the outcome is the same) then add the appropriate actions
If the added cases all represent the default case (i.e. they all have the same rule statement) then consider adding a single default statement in the non-conditional action section, along with a single postInfo for the common rule statement.
If your rules are unambiguous and complete then you will need at least one test case for every rule. There are several ways to generate test data:
Hand craft a test case corresponding to each rule column
Copy the body of the condition values to a new rulesheet’s action section and use that to generate the necessary test cases automatically
Create a generic test data generator rule sheet (see rulesworld article on how to generate test data)
Import test data in XML format
Connect to a database to load test data
Corticon confirms there are no logical loops in rulesheet SSNA
Figure 21 Logical Loop Message
It’s important to get the right answer for the right reason
Indicate the expected result in the comments
If the result is one or more attribute values then create attributes that contain the expected results, and add a final rulesheet to compare the actuals with the expected values
If the expected results are more complex structures then create parallel expected results business objects and add a rulesheet at the end to compare actual and expected
Here’s an example of some test results:
Figure 22 Test Results
Once the rule model is complete it can be deployed to the Corticon execution engine. The server can run in several environments:
Installed on the local machine under Tomcat or IIS
On the Corticon Cloud Server
Installed under Websphere or Weblogic (probably not on the local machine)
Whichever environment is used, the appropriate monitored attributes should be configured.
Corticon offers two modes of execution: as a web service, or as an in-process call from application code.
For the Web Services option the Corticon Web Console is used.
Only authorized users can log in to the console:
Figure 23 Logging on to the Server Console
Figure 24 Server Console Options
The first option allows you to see the currently deployed decision services
Figure 25 Deployed Decision Services
By clicking on the number of executions you can view the performance statistics for that decision service:
Figure 26 Server Performance - Distribution Chart
Details of the decision service can be viewed by clicking on the name:
Figure 27 Decision Service Settings
Various reports on the rule model can be generated from here too.
This report allows you to list all of the rule statements in the rule model:
Figure 28 Rule Statements Report
Other reports allow you to display more details of the actual rules.
The second option is used to deploy a new decision service
Figure 29 Deploying a Decision Service
Now the new decision service shows up in the list of deployed decision services
Figure 30 Decision Service Names and Versions
From Studio we can invoke this newly deployed decision service.
The first step is to update the list of known deployed decision services:
Figure 31 Update List of Deployed Decision Services in Studio
In this example, Corticon has located decision services on several platforms
Figure 32 Deployed Decision Services Message
Four were found on the local machine, five were found on the Corticon Cloud Server and eleven were found on another machine.
These decision services will appear in the tester:
Figure 33 List of Available Decision Services for Testing
Once we have selected the one we want it will appear at the top of the test case:
Figure 34 Testing Using a Remote Decision Service
Now when we execute the test, the data will be sent to the Corticon server for execution.
After executing it once we will see that the server console is updated.
If we want to monitor the decisions made by the rules we can open the service configuration section:
Figure 35 Configuring Monitored Attributes
Currently no attributes are being monitored.
If we add some attributes like this
Figure 36 Monitored Attribute List
and then run more tests we will see something like this:
Figure 37 Monitored Attribute Charts
These same tests can be executed from any SOAP client, such as SoapUI.
In order to do this you must first generate the WSDL for the decision service.
This is done from the deployment console which is accessible in Studio:
Figure 38 Deployment Console in Studio
Once the WSDL is generated it can be imported into SoapUI and will look something like this:
Tests can also be driven from application code (Java or .NET, or from a BPM).
Corticon Collaborator is the tool that provides this functionality. Authorized users check in and check out their rule models, vocabularies and test cases, and a complete history of changes and versions is maintained. Here is an example of what a user might see:
Figure 39 Rule Repository Folders
Within a folder the user will see:
Figure 40 Repository Functions
The menu shows some of the functions that can be performed.
When a rule model needs to be moved into production, a workflow can be started in Collaborator which moves the assets through a series of approval steps such as notifying approvers by email, coordinating their responses and keeping an audit trail of the approvals
Here’s what a rule reviewer might see in her email inbox:
Figure 41 Email Notification of Rule Models to be Approved
And here is what might appear in the content of the email
Figure 42 Sample Email Notification
By following the link the rule approver can gain access to the rule asset. This may require that she log on:
Figure 43 Access to the Repository is Controlled
After Jenny performs her review task, Tom (the rule author) will be notified by email:
Figure 44 Email Notification of Approval
This information is also maintained in the Collaborator audit trails.
Tom can monitor the progress of the approval process as follows:
Figure 45 Tracking Approval Workflows
You can see that there are several overdue tasks.
Tom can view the details of any of these tasks:
Figure 46 Approval Workflow Details
We can see in this example that Jenny (the QA person) approved it, *** (whose participation was optional) was skipped, but Harry (the administrator) is holding things up.
Corticon supports the deployment of rules and rule sets to various environments (i.e. development, staging, production).
Back in Collaborator we saw the folder that the rule author had access to. If we now login as the administrator we see more folders (which the rule author does not have access to). The administrator can see the development folder and also the UAT and production folders
Figure 47 Migrating Rule Assets to Production
In response to the rule author’s workflow, the administrator will migrate the rule assets from Development into UAT. Collaborator will keep a log of this activity. Once in UAT more testing and approval cycles will probably take place until eventually the rules get migrated into production.
Once the decision service is deployed on the production server it can be invoked by any SOAP Client or by any BPM software that can make a web services call or from application code such as Java or .NET
A generic SOAP client such as SOAP UI can be used to set up and run tests against the deployed decision service:
Figure 48 A Typical SOAP Client
The SOAP Request would look like this
Figure 49 Corticon SOAP Request
The results are presented similarly to what you see in the Corticon tester:
Figure 50 Corticon SOAP Response
In fact you can export this SOAP message from the test cases in Studio:
Figure 51 Studio Test Data
By selecting from the Test menu
Figure 52 Studio Export SOAP Menu
Corticon also partners with many of the BPM vendors.
Here’s an example of a business process that invokes a decision service.
Most BPMs provide for the import of WSDL and automatically generate the necessary connection to external web services. All that is required to invoke the decision service is to map the process variables to the input variables of the decision service.
Figure 53 Decision Service Invoked from a Business Process
Decision Services may also be invoked from programming languages such as Java.
In this case, in addition to the web services interface, you can also make a direct in-process call. To do this the Corticon Server classes are compiled into your java program. Then you can invoke the server class “execute” method and pass the data either as XML or as native java objects.
Here’s an example of the core java code you would write to invoke a decision service:
Figure 54 Java Code to Invoke a Decision Service
The actual code would contain additional statements to populate the FEMAdata object, to catch any errors, and to process the results coming back from the rule engine.
Execution Monitoring of Rules
Once the rules are in production and being used, our administrator may want to monitor how things are going (how many executions per second for example) and our business user may want to know how many executions resulted in High, Medium or Low risk.
This can be accomplished by using the Corticon Monitoring System
Figure 55 Server Environment Settings
The Server settings:
Figure 56 Server Settings
Server Throughput Graph
Figure 57 Server Throughput Graph
Harry sees that performance is not as high as expected and recommends increasing the number of transactions that are sent in one call to the rule engine.
Immediately he sees the throughput jump from about 300 per second to almost 20,000 per second.
Figure 58 Server Response Time Graph
Decision Service Statistics
By examining the deployed Decision Services Harry can see that most of the transactions (over 1 million, in fact) are using version 3, the latest version, but a few transactions are still using the older versions.
Both Tom and Harry can monitor the number of transactions being processed by the server:
Figure 59 Deployed Decision Services
Tom can use the server monitor to see the distribution of decisions
Figure 60 Monitored Attributes
In this example he can see that about 60% of the 1.5 million transactions have been rated as low risk, with high and medium roughly equal at about 20% each.