Author: Sindhu V. Raghavan
Source: Sindhu V. Raghavan. Bayesian Abductive Logic Programs: A Probabilistic Logic for Abductive Reasoning. The University of Texas at Austin, Austin, TX 78712, USA. http://ijcai.org/papers11/Papers/IJCAI11-492.pdf
In this proposal, we introduce Bayesian Abductive Logic Programs (BALP), a probabilistic logic that adapts Bayesian Logic Programs (BLPs) for abductive reasoning. Like BLPs, BALPs combine first-order logic and Bayes nets. However, unlike BLPs, which use deduction to construct Bayes nets, BALPs employ logical abduction. As a result, BALPs are better suited to problems like plan/activity recognition that require abductive reasoning. To demonstrate the efficacy of BALPs, we apply them to two abductive reasoning tasks: plan recognition and natural language understanding.
Abduction, also called abductive reasoning, is defined as the process of finding the best explanation for a set of observations [Peirce, 1958]. It is widely used in tasks such as activity recognition, plan recognition, and diagnosis that require inferring cause from effect. For example, in plan recognition, given a set of actions performed by an intelligent agent, abductive reasoning is used to identify the agent's high-level plan or goal. Similarly, in medical diagnosis, given a set of symptoms observed in a patient, abductive reasoning is used to identify the disease that caused them. Several other tasks, such as natural language understanding and image interpretation, can also be cast as abductive reasoning. Abductive reasoning is hypothetical in nature, i.e., one can only hypothesize the cause of the observed effects. Further, there may be several possible explanations for the same set of observations. As a result, it is considered a weak kind of inference [Kakas et al., 1993].
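As a toy illustration of abduction in the diagnosis setting described above (all predicates and rules here are invented for this sketch, not taken from the proposal), one can enumerate candidate causes and prefer the most parsimonious set that covers every observation:

```python
from itertools import combinations

# Hypothetical causal rules: each candidate cause maps to the
# effects it would produce if true. Purely illustrative values.
rules = {
    "flu": {"fever", "cough"},
    "cold": {"cough", "sneezing"},
    "allergy": {"sneezing"},
}

def explanations(observations, rules):
    """Return the smallest sets of causes whose predicted effects
    cover all observations (a crude parsimony criterion)."""
    causes = list(rules)
    for size in range(1, len(causes) + 1):
        found = [set(combo) for combo in combinations(causes, size)
                 if observations <= set().union(*(rules[c] for c in combo))]
        if found:
            return found  # all explanations of minimal size
    return []

print(explanations({"fever", "cough"}, rules))     # [{'flu'}]
print(explanations({"cough", "sneezing"}, rules))  # [{'cold'}]
```

This captures only the logical side of abduction; as the next paragraphs discuss, ranking competing explanations by likelihood requires a probabilistic model on top.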
Most previous approaches to abductive reasoning have been based on first-order logic and determine a small set of assumptions sufficient to deduce the observations [Pople, 1973; Levesque, 1989; Kakas et al., 1993]. While these logic-based approaches handle structured representations, they are unable to handle uncertainty in the observations or background knowledge and are incapable of estimating the likelihood of alternative explanations. Another popular approach to abduction involves using Bayes nets to compute the posterior probability of alternative explanations given the observations [Pearl, 1988]. A major limitation of this approach is that it cannot handle structured representations involving relations amongst multiple entities since Bayes nets are essentially propositional in nature.
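The Bayes-net approach to abduction mentioned above can be sketched on a tiny propositional network (all probabilities below are made up for illustration): the posterior of each candidate explanation given the observation is computed by enumerating the joint distribution.

```python
from itertools import product

# Hypothetical priors for two candidate causes of a cough.
p_flu, p_cold = 0.1, 0.2

def p_cough(flu, cold):
    """CPT for the observed effect given each joint cause
    assignment (illustrative numbers only)."""
    return {(True, True): 0.95, (True, False): 0.80,
            (False, True): 0.70, (False, False): 0.05}[(flu, cold)]

def posterior_flu_given_cough():
    """P(flu | cough=true) by brute-force enumeration."""
    num = den = 0.0
    for flu, cold in product([True, False], repeat=2):
        joint = ((p_flu if flu else 1 - p_flu)
                 * (p_cold if cold else 1 - p_cold)
                 * p_cough(flu, cold))
        den += joint
        if flu:
            num += joint
    return num / den

print(round(posterior_flu_given_cough(), 3))  # roughly 0.34
```

Note that the network is purely propositional: it cannot express a rule like "any agent at any store may be shopping," which is exactly the structural limitation the SRL formalisms discussed next address.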
Recently, a number of formalisms that integrate first-order logic and probabilistic graphical models have been developed in the area of statistical relational learning (SRL) [Getoor and Taskar, 2007]. Of these formalisms, Markov Logic Networks (MLNs) [Richardson and Domingos, 2006], which combine first-order logic and undirected graphical models (Markov nets), have been used for abductive plan recognition by Kate and Mooney [2009]. Since MLNs employ deduction for logical inference, Kate and Mooney adapt them for abduction by adding reverse implications for every rule in the knowledge base. However, the addition of these rules increases the size and complexity of the MLN, resulting in a computationally expensive model. Like MLNs, most SRL formalisms use deduction for logical inference, and hence cannot be used effectively for abductive reasoning. As a result, my research is focused on the development of an SRL formalism that can perform abductive reasoning efficiently.
In this proposal, we introduce Bayesian Abductive Logic Programs (BALP), a new formalism that adapts Bayesian Logic Programs (BLPs) [Kersting and De Raedt, 2001] for abductive reasoning. Like most SRL formalisms, BLPs use deduction for logical inference, and hence cannot be used effectively for abductive reasoning. We therefore enhance BLPs to support abductive reasoning by employing logical abduction instead of deduction. Like BLPs, BALPs combine first-order logic and directed graphical models (Bayes nets), integrating the strengths of both and thus overcoming the limitations of the approaches mentioned above. The contributions of my Ph.D. thesis are as follows:
My progress to date includes the development of the BALP framework for abductive reasoning and an experimental evaluation of the framework on the task of plan recognition.
Bayesian Abductive Logic Programs (BALP) enhance BLPs with logical abduction. In an abductive task like plan recognition, the known facts are insufficient to support the derivation of deductive proof trees for the requisite queries. By using abduction, missing literals can be assumed in order to complete the proof trees needed to determine the structure of the ground network. We derive a set of most-specific abductive proof trees for the given observations using the method originally proposed by Stickel [1988]. The resulting abductive proof trees are then used to build the structure of the ground Bayes net using the standard approach for BLPs. We then learn the parameters of the BALP framework using the EM algorithm adapted for BLPs by Kersting and De Raedt [2008]. For probabilistic inference, we use off-the-shelf approaches [Pearl, 1988; Gogate and Dechter, 2007] developed for Bayesian networks.
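The abductive proof construction described above can be sketched in a heavily simplified propositional form. The plan-recognition rules and literals below are invented for illustration, and a real implementation in the spirit of Stickel's method would consider every matching rule and unify first-order literals rather than taking only the first propositional rule:

```python
# rules: head -> list of alternative bodies (each body a list of literals).
# Read "at(store): [[plan(shopping)]]" as the logic-program clause
# at(store) :- plan(shopping). All symbols are made up for this sketch.
rules = {
    "at(store)": [["plan(shopping)"]],
    "get(milk)": [["plan(shopping)"], ["plan(cooking)"]],
}
facts = set()  # no observed facts beyond the observations themselves

def abduce(goal, rules, facts, assumed):
    """Backward-chain on goal; any literal with no supporting rule or
    fact is assumed, completing the proof tree abductively."""
    if goal in facts:
        return
    bodies = rules.get(goal)
    if not bodies:
        assumed.add(goal)  # cannot be derived: assume it
        return
    for literal in bodies[0]:  # first matching rule only, for brevity
        abduce(literal, rules, facts, assumed)

assumed = set()
for obs in ["at(store)", "get(milk)"]:
    abduce(obs, rules, facts, assumed)
print(assumed)  # {'plan(shopping)'} -- one assumed plan explains both
```

The assumed literals and the proof trees that introduce them would then supply the nodes and edges of the ground Bayes net, over which parameter learning and probabilistic inference proceed as for ordinary BLPs.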
The task of plan recognition involves identifying the high-level plan of an agent based on its actions. We applied BALPs to this task and compared their performance with existing approaches for plan recognition. In an experimental evaluation on three plan-recognition datasets, we compared BALPs to plan-recognition approaches from different categories (first-order logic, statistical learning, SRL). We found that BALPs consistently outperformed all of these approaches, demonstrating their effectiveness for plan recognition.
Going forward, we would like to extend BALPs to the task of natural language understanding. In this context, we plan to apply BALPs to perform deeper inference to improve the performance of a question answering system. An information extraction (IE) system can extract only those facts that are explicitly stated in the text; for some queries, however, the question answering system must perform deeper inference based on those explicitly stated facts. We would like to apply BALPs to perform this kind of deeper inference. We observe that this inference does not always have to be abductive in nature: in some cases, a combination of abductive and deductive inference is required to answer certain queries. Therefore, we would like to enhance BALPs to perform both abductive and deductive inference for natural language understanding.
Learning a model for any SRL formalism involves two tasks: learning the structure (first-order clauses) and learning the parameters. For plan recognition, the structure of the model can be developed manually, either based on input from an expert or based on the planning domain developed for the planner. However, manually developing the set of first-order clauses for a natural language understanding system is not trivial due to the vastness of the domain. For future work, we focus on the problem of learning the structure of the model for the task of natural language understanding.