
AI-Driven Approaches to Rule Induction in Crisis Management and Decision Making Support


Need for Automated Critical Decision Making Support

Decision making support is expected to become a vital part of high-stakes industries such as crisis management, military and defense. Conventional rule engines allow rules to be manually coded or specified so that a system can suggest decisions based on how incoming data relates to those predefined rules. A common flaw with this approach is that a human operator may make a real-world decision that diverges from the predefined rules. Such a divergence points to one of two non-ideal scenarios:


  1. The operator is making a poor decision that does not correspond to a rule.

  2. The rule engine is outdated and does not account for appropriate procedures for new and existing scenarios.


In such cases, the decision suggested by the predefined rules for the given input data must be compared with the decision actually made by the human operator. The goal of this comparison is to determine whether a new rule must be created or whether the human operator made a mistake, in which case the rule engine is fine as it is.
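
As a concrete illustration, the sketch below (in Python) shows one way such a divergence check could be logged for later review. The rule representation, data structures and function names are illustrative assumptions, not a specific engine's API.

# Minimal sketch of logging divergences between the engine's suggested decision and
# the operator's actual decision. Rules are assumed to be (conditions, decision) pairs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    features: dict           # environment variables, e.g. {"severity": "high", "units": 3}
    operator_decision: str   # what the human operator actually decided

def suggest_decision(rules, features) -> Optional[str]:
    """Return the decision of the first rule whose conditions all hold, if any."""
    for conditions, decision in rules:
        if all(features.get(k) == v for k, v in conditions.items()):
            return decision
    return None

def find_divergences(rules, decision_log):
    """Collect cases where the operator diverged from the engine's suggestion."""
    return [obs for obs in decision_log
            if suggest_decision(rules, obs.features) != obs.operator_decision]

Each divergence collected this way is a candidate for either a new rule or an operator error, which is exactly the judgement the rest of this article is about.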


Rule Induction with Artificial Intelligence

This is where the concept of rule induction comes in. Systems typically rely on baseline rules predefined by humans, especially in cold-start scenarios where a long trace of decisions has not yet been saved in the system.


However, manually inducing new rules from large, incoming batches of data would be a laborious task for a human. Rule induction is a machine-learning technique that automates the process of extracting rules from a given set of observations. Various algorithms may be used to determine the relationship between the attributes, or features, in the dataset and the labels that represent the decisions.


Rule induction typically extracts a combination of conditions (if…then…else statements) from the features that strongly influence a particular decision. Beyond removing the laborious task of manually inducing new rules, machines are able to detect implicit factors in data that are difficult for a human to spot. While humans are capable of extracting high-level features from data, machines can extract both high-level and low-level features that humans typically miss, thereby providing a more precise means of deriving appropriate rules in a system.


Given a sufficiently large volume of past environment data and the associated human decisions, various Machine Learning algorithms may be run to extract patterns from this data. These extracted patterns form the rules induced from the record of human operator decisions.



The concept may be framed as a supervised Machine Learning classification problem where the system’s environment variables serve as input features and the human operator’s decision serves as the target class. The aim of the algorithm is to find the conditions on the environment variables that predict appropriate decisions with high accuracy and precision. Traditional rule engines generate conditional statements, so a good starting point for a rule induction system is the decision tree used in traditional supervised Machine Learning. Decision trees are trained on input data to recursively partition it into subsets based on the features.
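
As a rough sketch of this framing, the Python snippet below trains a small decision tree on hypothetical environment features to predict the operator's decision. The CSV file, column names and the use of scikit-learn and pandas are assumptions made purely for illustration.

# A minimal sketch: rule induction framed as supervised classification with a decision tree.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("operator_decisions.csv")                    # past observations (hypothetical file)
X = df[["wind_speed", "severity_level", "units_available"]]   # environment variables as features
y = df["operator_decision"]                                   # the decision the human actually made

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A shallow tree keeps the induced conditions short enough to compare against human-written rules.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
print("held-out accuracy:", tree.score(X_test, y_test))

Limiting the tree depth is a deliberate design choice here: it trades a little accuracy for conditions that remain short and comparable to human-authored rules.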


A tree-like structure is generated in which multiple nested conditions split the data down to a specific target class. Decision trees naturally derive the conditional “if…then…else” statements found in standard rule induction procedures, so the conditions extracted from a decision tree are more interpretable than what a neural network would produce. A caveat of this approach, however, is that a large decision tree may generate long, nested conditional statements that are difficult to compare with the naturally simpler conditions predefined by a human.
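
Continuing the hypothetical sketch above, scikit-learn's export_text utility can render the learned splits as nested if…then conditions, which makes the link between tree depth and rule complexity directly visible.

# Continuing the sketch above: render the learned splits as readable conditions.
from sklearn.tree import export_text

print(export_text(tree, feature_names=list(X.columns)))
# Illustrative shape of the output (values are made up):
# |--- wind_speed <= 35.50
# |   |--- units_available <= 2.50
# |   |   |--- class: hold_position
# |   |--- units_available >  2.50
# |   |   |--- class: deploy_team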


The alternative approach of using Deep Learning with a supervised neural network is a possible option, but it comes at the cost of reduced interpretability. A neural network is trained on data to optimize weights for input features so as to produce high classification accuracy. The weights encode implicit rules for how a feature relates to a class, but these rules are far less intuitive than conditional statements because they are not explicitly understood by humans. As such, it would be more difficult to induce a new rule from these weights unless post-hoc analysis was performed, which introduces an additional layer of intricacy. Hence, traditional machine learning approaches such as decision trees appear to be the better choice.


Bridging the Gap between Human-Defined and Machine-Extracted Rules

Herein lies the challenge of comparing newly extracted rules with those defined by a human in the default rule engine: the two are expressed in differing formats. A solution is to standardize the rule definition by establishing a Data Transfer Object (DTO) that defines how rules must be interpreted for both human-defined and machine-extracted conditions. The system should implement a method to parse the extracted conditions and fit them to the DTO. Having a standardized format makes it easier to check whether an extracted condition already exists in the default rule engine.
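
One possible shape for such a DTO is sketched below in Python; the (feature, operator, value) representation of a condition and the helper for spotting unseen rules are illustrative assumptions rather than a prescribed schema.

# A minimal sketch of a shared rule DTO that both human-defined and machine-extracted
# rules are parsed into. Field names and structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    feature: str    # e.g. "wind_speed"
    operator: str   # e.g. "<=", ">", "=="
    value: str      # stored as text so numeric and categorical thresholds share one type

@dataclass(frozen=True)
class RuleDTO:
    conditions: frozenset    # set of Condition objects, so ordering does not affect comparison
    decision: str
    source: str = "human"    # "human" or "induced"

def is_new_rule(candidate: RuleDTO, existing_rules) -> bool:
    """A candidate is new if no existing rule shares both its conditions and its decision."""
    return not any(candidate.conditions == r.conditions and candidate.decision == r.decision
                   for r in existing_rules)

Storing conditions as an unordered set is one way to make the comparison robust: a human-written rule and a tree-derived rule that test the same thresholds in a different order still match.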


While the goal of the proposed solution is to automate the rule induction process, human oversight remains critical. A senior expert needs to review newly induced rules to ensure that they have not been corrupted by mistakes made by human operators. This precautionary measure maintains the integrity of the decision support system. Another essential factor to consider is the volume of new rules: it is computationally and operationally inefficient to update the engine for every new decision. It is more effective to batch a significant volume of newly verified decisions before inducing new rules and updating the engine.
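
A minimal sketch of that batching policy is given below; the batch size and the retraining hook are placeholders for whatever process an organisation adopts.

# A minimal sketch of batching expert-verified decisions before re-inducing rules.
# The batch size and the retrain callback are assumptions for illustration.
class InductionBatcher:
    def __init__(self, retrain_fn, batch_size=500):
        self.retrain_fn = retrain_fn   # e.g. re-fits the decision tree and updates the rule engine
        self.batch_size = batch_size
        self.pending = []

    def add_verified(self, observation):
        """Queue a decision that a senior expert has already reviewed and approved."""
        self.pending.append(observation)
        if len(self.pending) >= self.batch_size:
            self.retrain_fn(self.pending)   # induce new rules from the whole batch at once
            self.pending.clear()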


Decision making support is expected to become a critical element in the defense industry. The evolution of Artificial Intelligence offers a contemporary and automated means of analysing and comparing decisions in a system, including those suggested by its rule engine and those actually made by a human operator. Machine learning rule induction provides a more automated and accurate means for a system to update its rules based on a large influx of human operator decisions.


Entities that deliver military or crisis management solutions must carefully select the algorithms, models and techniques that validate, induce and update the appropriate rules for their decision support systems.


The goal is twofold: to detect when human operators are making mistakes despite correct rules, and to detect when existing rules are outdated because human operators are consistently making different decisions. In either case, AI-driven rule induction offers a powerful tool for auditing and strengthening rule-based systems.


Mikhaile Collins - SKIOS

 
 
 
