Explainable AI - justifiable decision making
One of the benefits of using a rules-based system, such as VisiRule, is the ability of the system to provide explanations and justifications as to how it reached a decision, why a question is being asked, what a term means, and more. For AI to gain the trust and confidence of the public, it must be able to deliver justifiable decision making. You can read about Google's new Explainable AI Service.
What is a Good Explanation?
A good explanation describes how a decision was reached in language which is 'compatible' with the person receiving it. It combines facts, rules and inferences of an appropriate complexity.
Who is the Explanation for?
There are often multiple audiences for an explanation: end-users vs operators vs managers. So, when explaining why a system denied a loan to a consumer, we might use statements such as 'Your employment record has several gaps' or 'Your income seems too low to make the repayments'; whereas we might use different terms when explaining the same decision to an agent or broker, as sketched below.
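As a minimal sketch in Python (the reason codes and wording here are hypothetical, not VisiRule's), audience-specific explanations can be held as templates keyed by reason and audience:

```python
# Hypothetical reason codes, with wording varied by audience.
EXPLANATIONS = {
    "employment_gaps": {
        "consumer": "Your employment record has several gaps.",
        "broker": "Applicant fails the continuous-employment test.",
    },
    "low_income": {
        "consumer": "Your income seems too low to make the repayments.",
        "broker": "Debt-to-income ratio exceeds the underwriting threshold.",
    },
}

def explain(reason: str, audience: str) -> str:
    """Return the wording for a decision reason, suited to the audience."""
    texts = EXPLANATIONS[reason]
    return texts.get(audience, texts["consumer"])  # fall back to plain wording

print(explain("low_income", "broker"))
```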
When are we providing the Explanation?
We mostly want to explain how a decision was reached: say, why a loan application was accepted or rejected. To do this, we can identify which tests succeeded or failed, and report on any calculations which failed.
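One way to support this, sketched below in Python under assumed field names, is to run each test, record its outcome, and report the failures alongside the decision:

```python
# A sketch of decision tracing: run each named test, record pass/fail,
# and report the failures that justify a rejection. Fields are illustrative.
def income_sufficient(app):
    return app["income"] >= 12 * app["monthly_repayment"]

def employment_stable(app):
    return app["employment_gaps"] <= 2

TESTS = [("income sufficient", income_sufficient),
         ("employment stable", employment_stable)]

def decide(app):
    trace = [(name, test(app)) for name, test in TESTS]
    failed = [name for name, ok in trace if not ok]
    return (not failed), trace, failed

approved, trace, failed = decide(
    {"income": 20000, "monthly_repayment": 2500, "employment_gaps": 1})
print("accepted" if approved else f"rejected, failed tests: {failed}")
```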
We may also want to provide explanations and guidance to the user, to help them better understand why they are being asked a certain question, or what exactly it means.
What about Background Information?
The system needs to be able to explain basic terms and the meanings of technical phrases. These can be loaded from an FAQ-styled knowledge base (KB).
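For illustration, such a KB could be as simple as a glossary file; the file name and structure here are assumptions:

```python
import json

# Load term definitions from an FAQ-styled glossary, e.g.
# {"APR": "Annual Percentage Rate: the yearly cost of a loan ..."}
with open("glossary.json") as f:
    GLOSSARY = json.load(f)

def define(term: str) -> str:
    return GLOSSARY.get(term, f"Sorry, no definition is available for '{term}'.")
```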
How does VisiRule Support Explanations?
VisiRule allows you to attach text and text-oriented functions, as well as images, videos and links, to all the principal decision points in a chart, i.e. questions, answers and expressions. These can then be retrieved and used to help provide suitable explanations.
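A rough model of the idea, not VisiRule's actual data structure, is a decision node that carries its explanation assets alongside the question itself:

```python
from dataclasses import dataclass, field

# A rough model of a decision point carrying its explanation assets.
# This is illustrative only, not the actual VisiRule representation.
@dataclass
class DecisionNode:
    name: str
    prompt: str
    help_text: str = ""     # shown when the user asks what a question means
    image_url: str = ""     # optional photograph or diagram
    video_url: str = ""     # optional explanatory video
    links: list[str] = field(default_factory=list)

income_q = DecisionNode(
    name="income",
    prompt="What is your annual income?",
    help_text="Gross income before tax, including regular bonuses.",
    links=["https://example.com/income-guidance"],  # placeholder URL
)
```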
How, What, Why and When?
VisiRule supports various interlocutors, and developers can also designate their own. The interlocutors help shape the content used when providing an explanation.
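As a sketch (the interlocutor names and handler mapping are assumptions, not VisiRule's API), each kind of interlocutor can map to a content selector, with room for developer-defined additions:

```python
# Each interlocutor kind selects a different slice of explanation content.
def explain_how(node):
    return "Reached via: " + " -> ".join(node["path"])

def explain_what(node):
    return node["definition"]

def explain_why(node):
    return node["reason_for_asking"]

INTERLOCUTORS = {"how": explain_how, "what": explain_what, "why": explain_why}

def respond(kind, node):
    return INTERLOCUTORS[kind](node)

# Developers can register their own interlocutors:
INTERLOCUTORS["when"] = lambda node: node.get("timing", "At the start of the assessment.")
```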
Visual Explanations: Text vs Graphics vs Video
Explanations in VisiRule are not limited to text. They can be visual, say photographs of equipment or human body parts, or video-based.
VisiRule can display the path or trail through the chart as a way of conveying, graphically, how a decision was reached.
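One simple way to sketch this outside VisiRule is to emit the recorded trail as Graphviz DOT; VisiRule's own chart rendering is, of course, richer:

```python
# Render a decision trail as Graphviz DOT: each visited question becomes
# a node, and the answer given labels the edge to the next question.
def trail_to_dot(trail):
    """trail: ordered list of (question, answer) pairs visited."""
    lines = ["digraph trail {"]
    for (q1, answer), (q2, _) in zip(trail, trail[1:]):
        lines.append(f'  "{q1}" -> "{q2}" [label="{answer}"];')
    lines.append("}")
    return "\n".join(lines)

print(trail_to_dot([("Income?", "low"), ("Guarantor?", "no"), ("Reject", "")]))
```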
You can also deliver explanations through a chatbot conversation driven by interlocutors.
A chatbot is a computer program which conducts a conversation via text. Chatbots are typically used in dialog systems for various practical purposes, including customer service and information acquisition. Some chatbots use sophisticated natural language processing systems, but most simply scan for keywords within the input, then pull from a database the reply with the most matching keywords or the most similar wording pattern. Chatbots have been associated with AI ever since ELIZA first appeared in the 1960s; some people actually mistook her for a human. ELIZA also introduced the personal-pronoun transformations common to ALICE and many other bots: 'Tell me what you think about me' is transformed into 'You want me to tell you what I think about you?', creating the illusion of understanding.
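A minimal sketch of that pronoun transformation in Python (real chatbots add keyword rules and response templates on top):

```python
# ELIZA-style pronoun reflection as a naive word-for-word swap. Note that
# mapping "you" to "i" is only right in subject position; a robust version
# must distinguish subject and object uses.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your",
                 "you": "i", "your": "my", "am": "are"}

def reflect(sentence: str) -> str:
    words = sentence.lower().rstrip(".?!").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

print("You want me to " + reflect("tell me what you think about me") + "?")
# -> You want me to tell you what i think about you?
```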
Expert System Rules
Expert Systems use rules to replicate the behaviour of human experts. Rules can come from experts who can also provide in-depth explanations as to why a rule is appropriate, what it represents and how it affects the outcome. When rules are induced from data, all we typically have are the counts or statistics generated by the algorithm used.
It is important to gain the trust of all those in the AI process. DARPA recently announced a $2bn initiative to help build that trust. DARPA Director Steve Walker stated: "What we're trying to do with explainable AI is have the machine tell the human 'here's the answer, and here's why I think this is the right answer' and explain to the human being how it got to that answer."
The LPA AI Technology Stack
At the technical level, the main execution mechanism VisiRule uses is backward-chaining inference with argument dereferencing. Forward-chaining is supported through the business-rules layer below VisiRule, namely Flex. Fuzzy logic and formal treatments of uncertainty handling are supported in Flint.
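To illustrate the style of inference (only as a toy; LPA's engine is Prolog-based, with full unification and argument dereferencing), a propositional backward-chainer tries to prove a goal from facts and rule bodies:

```python
# A toy backward-chainer over propositional rules: a goal holds if it is
# a known fact, or if every subgoal in some rule body for it can be proved.
RULES = {
    "approve_loan": [["income_ok", "employment_ok"]],
    "income_ok": [["high_income"], ["guarantor"]],
}
FACTS = {"employment_ok", "guarantor"}

def prove(goal):
    if goal in FACTS:
        return True
    return any(all(prove(sub) for sub in body)
               for body in RULES.get(goal, []))

print(prove("approve_loan"))  # True: employment_ok is a fact; income_ok holds via guarantor
```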
LPA has also recently developed VisiRule FastChart, which enables charts to be 'mined' from historical data using PMML-based decision trees. In this way, you can use machine learning to create your VisiRule charts.
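As a rough illustration of the mining step (using scikit-learn and made-up data; FastChart itself works from PMML rather than from scikit-learn directly), a decision tree induced from historical outcomes yields chart-like rules:

```python
# Induce a small decision tree from (made-up) historical loan outcomes,
# then print the learned rules. Tools such as sklearn2pmml can export
# similar models as PMML for downstream consumption.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[15000, 0], [45000, 1], [60000, 0], [22000, 3]]  # income, employment gaps
y = ["reject", "accept", "accept", "reject"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "gaps"]))
```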
There is also scope within VisiRule for machine learning using feedback loops, as charts can themselves be viewed as data structures by higher-level programs. Work on this has recently begun and will lead to charts which adjust themselves and improve their suitability and performance over time, based on performance analytics.