Paolo Liberatore's Publications

Belief Revision

Semantics of Belief Revisions

Contrazione e clausole complete
Paolo Liberatore
In Giornata di lavoro su ragionamento non monotono, 1994.

Abstract. This work presents new motivations, beyond those already known in the literature, in favour of studying contraction on finite (not deductively closed) bases, and describes an efficient algorithm for contraction on a finite base.

Arbitraggio
Paolo Liberatore
AI*IA notizie, 7(1):9-12, 1995.

Abstract. The concept underlying the axiomatization of revision is that of minimal change: alpha * beta must be obtained from alpha by a modification that makes beta true, and this change must be as small as possible. Several criteria can be used to assess the minimality of a change; in any case, if alpha & beta is consistent then alpha * beta = alpha & beta, since in this way beta is simply added to the old base without any further modification. In arbitration, the guiding concept is instead the maximal preservation of knowledge: the result of arbitrating two bases should retain as much information as possible from both. Moreover, one can arrange for the pieces of information of one base that are sharply in conflict with those of the other to be excluded. In this way, each of the two initial formulas is assumed to contain a ''part of the truth'', which is plausibly the information on which they ''resemble'' each other.
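
Illustration (not from the paper). A minimal, brute-force sketch of the minimal-change principle described above, assuming a Dalal-style operator in which ''distance'' is the Hamming distance between truth assignments; formulas are Python predicates over a fixed variable list. When the two formulas are jointly consistent the result coincides with their conjunction, as required by the postulates.

    from itertools import product

    def models(formula, variables):
        """All satisfying truth assignments of `formula`, as dicts."""
        result = []
        for bits in product([True, False], repeat=len(variables)):
            assignment = dict(zip(variables, bits))
            if formula(assignment):
                result.append(assignment)
        return result

    def hamming(m1, m2):
        """Number of variables on which two assignments disagree."""
        return sum(1 for v in m1 if m1[v] != m2[v])

    def revise(old, new, variables):
        """Dalal-style revision: among the models of `new`, keep those closest
        (in Hamming distance) to some model of `old`.  When old & new is
        consistent this returns exactly their common models."""
        old_models = models(old, variables)
        new_models = models(new, variables)
        if not old_models or not new_models:
            return new_models              # degenerate cases: just accept `new`
        distance = lambda m: min(hamming(m, o) for o in old_models)
        best = min(distance(m) for m in new_models)
        return [m for m in new_models if distance(m) == best]

    # old base: a and b; new information: not b
    alpha = lambda m: m['a'] and m['b']
    beta = lambda m: not m['b']
    print(revise(alpha, beta, ['a', 'b']))   # [{'a': True, 'b': False}]: a is kept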

Arbitration: A commutative operator for belief revision
Paolo Liberatore and Marco Schaerf
In Proceedings of the Second World Conference on the Fundamentals of Artificial Intelligence (WOCFAI'95), pages 217-228. Angkor Press, 1995.  
FTP download 
HTTP download

Abstract. The ability of Database Systems to cope with changing situations and information coming from different sources is crucial for their applicability in real-world scenarios. Recent work in the field of belief revision and update has provided us with techniques to handle change. In this paper we introduce a different form of revision aiming at capturing the process of ''merging'' possibly inconsistent pieces of information. We call this process arbitration. Along the lines of Gärdenfors' work, we propose a set of postulates for this operator and prove a representation theorem.

Arbitration (or how to merge knowledge bases)
Paolo Liberatore and Marco Schaerf
IEEE Transactions on Knowledge and Data Engineering, 10(1):76-90, 1998.  
HTTP download

Abstract. Knowledge-based systems must be able to ''intelligently'' manage a large amount of information coming from different sources and at different moments in time. Intelligent systems must be able to cope with a changing world by adopting a ''principled'' strategy. Many formalisms have been put forward in the AI and DB literature to address this problem. Among them, belief revision is one of the most successful frameworks to deal with dynamically changing worlds. Formal properties of belief revision have been investigated by Alchourron, Gärdenfors and Makinson, who put forward a set of postulates stating the properties that a belief revision operator should satisfy. Among these properties, a basic assumption of revision is that the new piece of information is totally reliable and, therefore, must be in the revised knowledge base. Different principles must be applied when there are two different sources of information and each one has a different view of the situation, the two views contradicting each other. If we do not have any reason to consider any of the sources completely unreliable, the best we can do is to ''merge'' the two views in a new and consistent one, trying to preserve as much information as possible. We call this merging process arbitration. In this paper we investigate the properties that any arbitration operator should satisfy. In the style of Alchourron, Gärdenfors and Makinson we propose a set of postulates, analyze their properties and propose actual operators for arbitration.

BReLS: A system for the integration of knowledge bases
Paolo Liberatore and Marco Schaerf
In Proceedings of the Seventh International Conference on Principles of Knowledge Representation and Reasoning (KR 2000), pages 145-152, 2000.  
FTP download 
HTTP download

Abstract. The process of integrating knowledge coming from different sources has been widely investigated in the literature. Three distinct conceptual approaches to this problem have been most successful: belief revision, merging and update. In this paper we present a framework that integrates these three approaches. In the proposed framework all three operations can be performed. We provide an example that can only be solved by applying more than one single style of knowledge integration and, therefore, cannot be addressed by any one of the approaches alone. The framework has been implemented, and the examples shown in this paper (as well as other examples from the belief revision literature) have been successfully tested.

A framework for belief update
Paolo Liberatore
In Proceedings of the Seventh European Workshop on Logics in Artificial Intelligence (JELIA 2000), pages 361-375, 2000.  
HTTP download

Abstract. In this paper we show how several different semantics for belief update can be expressed in a framework for reasoning about actions. This framework can therefore be considered as a common core of all these update formalisms, thus making it clear what they have in common. This framework also allows expressing scenarios that are problematic for the classical formalization of belief update.
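
Illustration (not from the paper). As a rough companion to the abstract, a sketch of the standard possible-models style of update in the propositional case (my own simplification, not the framework of the paper): every model of the old base is updated separately, keeping the models of the new formula whose set of changed variables is inclusion-minimal, and the results are collected. Revision, by contrast, compares the new formula with the old base as a whole.

    from itertools import product

    def models(formula, variables):
        """All satisfying truth assignments of `formula`, as dicts."""
        return [m for m in (dict(zip(variables, bits))
                            for bits in product([True, False], repeat=len(variables)))
                if formula(m)]

    def diff(m1, m2):
        """Set of variables on which two assignments disagree."""
        return {v for v in m1 if m1[v] != m2[v]}

    def update(old, new, variables):
        """Possible-models (PMA-style) update: update each model of `old`
        independently, keep the models of `new` whose change set is minimal
        under set inclusion, and take the union of the results."""
        new_models = models(new, variables)
        result = []
        for m in models(old, variables):
            diffs = [(n, diff(m, n)) for n in new_models]
            for n, d in diffs:
                if not any(other < d for _, other in diffs) and n not in result:
                    result.append(n)
        return result

    # old base: exactly one of b, m; new information: b.
    # Update keeps both {b, m} and {b, not m}; revision would keep only the latter.
    T = lambda s: s['b'] != s['m']
    P = lambda s: s['b']
    print(update(T, P, ['b', 'm']))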

Merging locally correct knowledge bases: A preliminary report
Paolo Liberatore
Technical Report cs.AI/0212053, Computing Research Repository (CoRR), 2002.  
HTTP download

Abstract. Belief integration methods are often aimed at deriving a single and consistent knowledge base that retains as much as possible of the knowledge bases to integrate. The rationale behind this approach is the minimal change principle: the result of the integration process should differ as little as possible from the knowledge bases to integrate. We show that this principle can be reformulated in terms of a more general model of belief revision, based on the assumption that inconsistency is due to the mistakes the knowledge bases contain. Current belief revision strategies are based on a specific kind of mistake, which however does not cover all possible ones. Some alternative possibilities are discussed.

Revision by history
Paolo Liberatore
Journal of Artificial Intelligence Research, 52:287-329, 2015.

Abstract. This article proposes a solution to the problem of obtaining plausibility information, which is necessary to perform belief revision: given a sequence of revisions, together with their results, derive a possible initial order that has generated them; this is different from the usual assumption of starting from an all-equal initial order and modifying it by a sequence of revisions. Four semantics for iterated revision are considered: natural, restrained, lexicographic and reinforcement. For each, a necessary and sufficient condition to the existence of an order generating a given history of revisions and results is proved. Complexity is proved coNP-complete in all cases but one (reinforcement revision with unbounded sequence length).

Complexity of Belief Revision

The complexity of model checking for belief revision and update
Paolo Liberatore and Marco Schaerf
In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI'96), pages 556-561. AAAI Press/The MIT Press, 1996.  
FTP download 
HTTP download

Abstract. One of the main challenges in the formal modeling of common-sense reasoning is the ability to cope with the dynamic nature of the world. Among the approaches put forward to address this problem are belief revision and update. Given a knowledge base T, representing our knowledge of the ''state of affairs'' of the world of interest, it is possible that we are led to trust another piece of information P, possibly inconsistent with the old one T. The aim of revision and update operators is to characterize the revised knowledge base T' that incorporates the new formula P into the old one T while preserving consistency and, at the same time, avoiding the loss of too much information in this process. In this paper we study the computational complexity of one of the main computational problems of belief revision and update: deciding if an interpretation M is a model of the revised knowledge base.

The complexity of iterated belief revision
Paolo Liberatore
In Proceedings of the Sixth International Conference on Database Theory (ICDT'97), pages 276-290, 1997.  
FTP download 
HTTP download

Abstract. In this paper we analyze the complexity of revising a knowledge base when an iteration of this process is necessary. The analysis concerns both the classical problems of belief revision (inference, model checking, computation of the new base) and new issues, related to the problem of committing the changes.

The complexity of belief update
Paolo Liberatore
In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI'97), pages 68-73, 1997.  
FTP download 
HTTP download

Abstract. Belief revision and belief update are two different forms of belief change, and they serve different purposes. In this paper we focus on belief update, the formalization of change in beliefs due to changes in the world. The complexity of the basic update (introduced by Winslett [1990]) has been determined by Eiter and Gottlob (1991). Since then, many other formalizations have been proposed to overcome the limitations and drawbacks of Winslett's update. In this paper we analyze the complexity of the proposals presented in the literature, and relate some of them to previous work on closed world reasoning.

Belief revision and update: Complexity of model checking
Paolo Liberatore and Marco Schaerf
Journal of Computer and System Sciences, 62(1):43-72, 2001.  
HTTP download

Abstract. One of the main challenges in the formal modeling of common-sense reasoning is the ability to cope with the dynamic nature of the world. Among the approaches put forward to address this problem are belief revision and update. Given a knowledge base T, representing our knowledge of the ''state of affairs'' of the world of interest, it is possible that we are led to trust another piece of information P, possibly inconsistent with the old one T. The aim of revision and update operators is to characterize the revised knowledge base T' that incorporates the new formula P into the old one T while preserving consistency and, at the same time, avoiding the loss of too much information in this process. In this paper we study the computational complexity, in the propositional case, of one of the main computational problems of belief revision and update: deciding if an interpretation M is a model of the revised knowledge base.

The complexity of belief update
Paolo Liberatore
Artificial Intelligence, 119(1-2):141-190, 2000.

Abstract. Belief revision and belief update are two different forms of belief change, and they serve different purposes. In this paper we focus on belief update, the formalization of change in beliefs due to changes in the world. The complexity of the basic update (introduced by Winslett) has been determined by Eiter and Gottlob. Since then, many other formalizations have been proposed to overcome the limitations and drawbacks of Winslett's update. In this paper we analyze the complexity of the proposals presented in the literature: the standard semantics by Winslett, the minimal change with exception and the minimal change with maximal disjunctive information by Zhang and Foo, the update with disjunctive function by Herzig, the abduction-based update and the generalized update by Boutilier. We relate some of these approaches to belief update to previous work on closed world reasoning.

Compact Representations and Belief Revision

The size of a revised knowledge base
Marco Cadoli, Francesco M. Donini, Paolo Liberatore, and Marco Schaerf
In Proceedings of the Fourteenth ACM SIGACT SIGMOD SIGART Symposium on Principles of Database Systems (PODS'95), pages 151-162. ACM Press and Addison Wesley, 1995.  
FTP download 
HTTP download

Abstract. In this paper we address a specific computational aspect of belief revision: The size of the propositional formula obtained by means of the revision of a formula with a new one. In particular, we focus on the size of the smallest formula equivalent to the revised knowledge base. The main result of this paper is that not all formalizations of belief revision are equal from this point of view. For some of them we show that the revised knowledge base can be expressed with a formula admitting a polynomial-space representation (we call these results ''compactability'' results). On the other hand we are able to prove that for other ones the revised knowledge base does not always admit a polynomial-space representation, unless the polynomial hierarchy collapses at a sufficiently low level (''non-compactability'' results). The time complexity of query answering for the revised knowledge base definitely has an impact on being able to represent the result of the revision compactly. Nevertheless, formalisms with the same complexity may have different compactability properties.

Compact representations of revision of Horn clauses
Paolo Liberatore
In Proceedings of the Eighth Australian Joint Artificial Intelligence Conference (AI'95), pages 347-354, 1995.  
FTP download 
HTTP download

Abstract. Several methods have been proposed in the past years to define how to deal with dynamic knowledge. From a computational point of view, different definitions lead to different computational properties, namely a complexity ranging from DELTAp2 to DELTAp3 in the polynomial hierarchy. Restricting the analysis to cases in which some operations become tractable (the Horn case), other problems arise. We show that, under some assumptions on the polynomial hierarchy, many update operators are uncompactable, i.e., it is impossible to find a representation of the result without an exponential-size explosion. We also supply the explicit representations of the updates for which this is possible.

The size of a revised knowledge base
Marco Cadoli, Francesco M. Donini, Paolo Liberatore, and Marco Schaerf
Artificial Intelligence, 115(1):25-64, 1999.  
HTTP download

Abstract. In this paper we address a specific computational aspect of belief revision: the size of the propositional formula obtained by means of the revision of a formula with a new one. In particular, we focus on the size of the smallest formula which is logically equivalent to the revised knowledge base. The main result of this paper is that not all formalizations of belief revision are equal from this point of view. For some of them we show that the revised knowledge base can be represented by a polynomial-size formula (we call these results ''compactability'' results). On the other hand, for other ones the revised knowledge base does not always admit a polynomial-space representation, unless the polynomial hierarchy collapses at a sufficiently low level (''non-compactability'' results). We also show that the time complexity of query answering for the revised knowledge base definitely has an impact on being able to represent the result of the revision compactly. Nevertheless, formalisms with the same complexity may have different compactability properties. We also study compactability properties for a weaker form of equivalence, called query equivalence, which allows the introduction of new propositional symbols. Moreover, we extend our analysis to the special case in which the new formula has constant size and to the case of sequences of revisions (i.e., iterated belief revision). A complete analysis along these four coordinates is shown.

Compilability and compact representations of revision of Horn knowledge bases
Paolo Liberatore
ACM Transactions on Computational Logic, 1(1):131-161, 2000.  
HTTP download

Abstract. Several methods have been proposed as an attempt to deal with dynamically-changing scenarios. From a computational point of view, different formalisms have different computational properties. In this paper we consider knowledge bases represented as sets of Horn clauses. The importance of this case is twofold: first, inference is polynomial, thus tractable; second, Horn clauses represent causal relations between facts, thus they are of great practical importance, although not all propositional knowledge bases can be represented in Horn form. The complexity of Horn revision is still high, and in some cases coincides with the complexity of the general (non-Horn) case. We analyze the complexity of belief revision from the point of view of compilation: we study the possibility of reducing the complexity by allowing a (possibly expensive) preprocessing of part of the input of the problem. Extending the work by Cadoli et al. [1999], we consider the problem of compact representation of revision in the Horn case, that is, given a knowledge base T and an update P (both represented by Horn clauses), decide whether T * P, the result of the revision, can be represented with a propositional formula whose size is polynomial in the size of T and P. We give this representation for all formalisms for which it exists, and we show that the existence of a compact representation is related to the possibility of decreasing the complexity of a formalism via preprocessing.

The compactness of belief revision and update operators
Paolo Liberatore and Marco Schaerf
Fundamenta Informaticae, 62(3-4):377-393, 2004.

Abstract. A propositional knowledge base can be seen as a compact representation of a set of models. When a knowledge base T is updated with a formula P, the resulting set of models can be represented in two ways: either by a theory T' that is equivalent to T*P or by the pair (T,P). The second representation can be super-polynomially more compact than the first. In this paper, we prove that the compactness of this representation depends on the specific semantics of *, e.g., Winslett's semantics is more compact than Ginsberg's.

Non-Monotonic Reasoning

Relating belief revision and circumscription
Paolo Liberatore and Marco Schaerf
In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI'95), pages 1557-1563. Morgan Kaufmann, Los Altos, 1995.  
FTP download 
HTTP download

Abstract. Nonmonotonic formalisms and belief revision operators have been introduced as useful tools to describe and reason about evolving scenarios. Both approaches have been proven effective in a number of different situations. However, little is known about their relationship. Previous work by Winslett has shown some correlations between a specific operator and circumscription. In this paper we greatly extend Winslett's work by establishing new relations between circumscription and a large number of belief revision operators. This highlights similarities and differences between these formalisms. Furthermore, these connections provide us with the possibility of importing results in one field into the other one.

Reducing belief revision to circumscription (and vice versa)
Paolo Liberatore and Marco Schaerf
Artificial Intelligence, 93(1-2):261-296, 1997.  
HTTP download

Abstract. Nonmonotonic formalisms and belief revision operators have been introduced as useful tools to describe and reason about evolving scenarios. Both approaches have been proven effective in a number of different situations. However, little is known about their relationship. Previous work by Winslett has shown some correlations between a specific operator and circumscription. In this paper we greatly extend Winslett's work by establishing new relations between circumscription and a large number of belief revision operators. This highlights similarities and differences between these formalisms. Furthermore, these connections provide us with the possibility of importing results in one field into the other one.

The complexity of model checking for propositional default logics
Paolo Liberatore and Marco Schaerf
In Proceedings of the Thirteenth European Conference on Artificial Intelligence (ECAI'98), pages 18-22. John Wiley Press, 1998.  
FTP download 
HTTP download

Abstract. Default logic is one of the most widely used formalisms to formalize commonsense reasoning. In this paper we analyze the complexity of deciding whether a propositional interpretation is a model of a default theory for some of the variants of default logic presented in the literature. We prove that all the analyzed variants have the same complexity and that this problem is in general Sigma2-complete, while it is coNP-complete under some restrictions on the form of the defaults.

Uncontroversial default logic
Paolo Liberatore
In Proceedings of the Fifteenth European Conference on Artificial Intelligence (ECAI 2002), pages 526-530, 2002.  
HTTP download

Abstract. Many variants of default logics exist. Two of the main differences among them are the choice between local or global consistency and the choice of accepting maximally successful sets of defaults. Proving a result that is valid in all variants amounts to showing either a proof that holds for all semantics or a different proof for each semantics. In this paper, we characterize theories that do not depend at all on what makes the semantics different. A result that is proved for such theories holds not only for all considered semantics, but also for any other semantics that differs from the classical one only because of these two choices. These theories are also of interest for practical applications of default logic, as an implemented system should be able to detect (for example, in order to warn the user) any theory whose semantics is debatable.

Uncontroversial default logic
Paolo Liberatore
Journal of Logic and Computation, 14(5):747-765, 2004.

Abstract. Many variants of default logics exist. Two of the main differences among them arise from the choice between local and global consistency, and the choice of whether or not to accept maximally successful sets of defaults. In this paper, we characterize theories that do not depend at all on what makes the semantics different, that is, theories for which these two choices do not matter. A result that is proved for such theories holds not only for all the considered semantics, but also for every other semantics that differs from them on the two choices.

Seminormalizing a default theory
Paolo Liberatore
Journal of Applied Non-Classical Logics, 15(3):321-340, 2005.

Abstract. Most of the work in default logic is about default theories that are completely specified. In this category are the proposals of appropriate semantics for default logic, the characterizations of the complexity of reasoning with a default theory, the algorithms for finding consequences of default theories, etc. Relatively little attention has been paid to the process of building a default theory, and most of the work on this topic is about translating knowledge bases from other formalisms (such as circumscription, autoepistemic logic, and action description languages) into default logic. This paper is about expressing knowledge in default logic. In particular, we assume that defaults are initially formulated as normal, and are then corrected using specific inference examples.

Representability in default logic
Paolo Liberatore
Journal of the Interest Group in Pure and Applied Logic, 13(3):335-351, 2005.  
HTTP download

Abstract. A default theory can be seen as a way for representing a set of formulae, i.e., its extensions. In this paper, we characterize the sets of formulae that can be expressed by a default theory according to various semantics: justified, constrained, rational, cumulative, QDL, CADL, and two semantics with priorities. These characterizations imply some non-translatability results between semantics.

The complexity of model checking for propositional default logics
Paolo Liberatore and Marco Schaerf
Data and Knowledge Engineering, 55(2):189-202, 2005.

Abstract. We analyze the complexity of deciding whether a propositional interpretation is a model of a default theory for some of the variants of default logic presented in the literature: Reiter's, justified, constrained, rational, and cumulative. We prove that all the analyzed variants are Sigma2-complete in the general case, and remain so even if all defaults are prerequisite-free and seminormal. Cumulative default logic is also Sigma2-complete even if all defaults are normal and prerequisite-free. The other semantics are Delta-log2-complete if all defaults are normal and prerequisites are allowed, and coNP-complete if all defaults are normal and prerequisite-free.

Complexity of the unique extension problem in default logic
Xishun Zhao and Paolo Liberatore
Fundamenta Informaticae, 53(1):79-104, 2002.  
HTTP download

Abstract. In this paper we analyze the problem of checking whether a default theory has a single extension. This problem is important for at least three reasons. First, if a theory has a single extension, nonmonotonic inference can be reduced to entailment in propositional logic (which is computationally easier) using the set of consequences of the generating defaults. Second, a theory with many extensions is typically weak, i.e., it has few consequences; this indicates that the theory is of little use, and that new information has to be added to it, either as new formulae, or as preferences over defaults. Third, some applications require as few extensions as possible (e.g., diagnosis). We study the complexity of checking whether a default theory has a single extension. We consider the combination of several restrictions of default logics: seminormal, normal, disjunction-free, unary, ordered. Complexity varies from the first to the third level of the polynomial hierarchy. The problem of checking whether a theory has a given number of extensions is also discussed.
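
Illustration (not from the paper). For the disjunction-free restriction mentioned above, extensions can be enumerated by brute force, since the deductive closure of a consistent set of literals is trivial. The sketch below is a naive rendering of Reiter's fixpoint definition under that restriction, assuming a consistent background theory W; it only fixes the terminology and does not reflect the algorithms or the complexity analysis of the paper.

    from itertools import combinations

    def neg(l):
        """Complement of a literal; '-p' is the negation of 'p'."""
        return l[1:] if l.startswith('-') else '-' + l

    def consistent(literals):
        return all(neg(l) not in literals for l in literals)

    def gamma(W, defaults, E):
        """Reiter's operator for literal-only theories: the least set containing W
        and closed under the defaults whose prerequisites it contains and whose
        justifications are consistent with the candidate extension E."""
        S = set(W)
        changed = True
        while changed:
            changed = False
            for pre, just, cons in defaults:
                applicable = set(pre) <= S and all(neg(j) not in E for j in just)
                if applicable and not set(cons) <= S:
                    S |= set(cons)
                    changed = True
        return S

    def extensions(W, defaults):
        """Every extension is W plus the consequents of some subset of defaults,
        so candidates can be enumerated and checked against the fixpoint condition."""
        found = []
        for size in range(len(defaults) + 1):
            for subset in combinations(defaults, size):
                E = set(W) | {c for _, _, cons in subset for c in cons}
                if consistent(E) and gamma(W, defaults, E) == E and E not in found:
                    found.append(E)
        return found

    # one normal default "bird : flies / flies", written (prerequisites, justifications, consequents)
    print(extensions(['bird'], [(['bird'], ['flies'], ['flies'])]))
    # one extension: {'bird', 'flies'}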

On the complexity of extension checking in default logic
Paolo Liberatore
Information Processing Letters, 98(2):61-65, 2006.

Abstract. The complexity of deciding equivalence of a formula to an extension of a default theory is investigated for the Reiter, justified, constrained, and rational semantics.

Consistency defaults
Paolo Liberatore
Studia Logica, 86(1):89-110, 2007.

Abstract. A consistency default is a propositional inference rule that asserts the consistency of a formula in its consequence. Consistency defaults allow for a straightforward encoding of domains in which it is explicitly known when something is possible. The logic of consistency defaults can be seen as a variant of cumulative default logic or as a generalization of justified default logic; it is also able to simulate Reiter default logic in the seminormal case. A semantical characterization of consistency defaults in terms of processes and in terms of a fixpoint equation is given, as well as a normal form.

Bijective faithful translations among default logics
Paolo Liberatore
Journal of Logic and Computation, 24(4):763-807, 2014.

Abstract. In this article, we report results about translations between variants of default logic such that the extensions of the theories that are the input and the output of the translation are in a bijective correspondence. We assume that a translation can introduce new variables and that the result of translating a theory can either be produced in time polynomial in the size of the theory or its output is of size polynomial in the size of the theory; we restrict to the case in which the original theory has extensions. This study fills a gap between two previous works, one studying bijective translations among restrictions of default logics and one studying non-bijective translations between variants of default logic.

Abduction and Diagnosis

Complexity and compilability of diagnosis in system graphs
Paolo Liberatore
In DX-96 Workshop on Theory of Diagnosis, 1996.

Compilability of abduction
Paolo Liberatore and Marco Schaerf
In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI 2000), 2000.  
FTP download 
HTTP download

Abstract. Abduction is one of the most important forms of reasoning and it has been successfully applied to several practical problems such as diagnosis. In this paper we investigate whether the computational complexity of abduction can be reduced by an appropriate use of preprocessing or compilation. This is motivated by the fact that part of the data of the problem (namely, the set of all possible assumptions and the theory relating assumptions and manifestations) is often known before the rest of the problem. We present a detailed analysis of the computational complexity of abduction when compilation is allowed.
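
Illustration (not from the paper). The abductive setting referred to in the abstract can be made concrete by a small brute-force sketch, under the simplifying assumption that hypotheses are propositional variables assumed true: an explanation is a set of hypotheses consistent with the theory that, together with it, entails the manifestation; the sketch returns the cardinality-minimal ones.

    from itertools import combinations, product

    def assignments(variables):
        for bits in product([True, False], repeat=len(variables)):
            yield dict(zip(variables, bits))

    def satisfiable(formula, variables):
        return any(formula(m) for m in assignments(variables))

    def entails(premise, conclusion, variables):
        """premise |= conclusion, both given as predicates over assignments."""
        return all(conclusion(m) for m in assignments(variables) if premise(m))

    def explanations(theory, hypotheses, manifestation, variables):
        """Cardinality-minimal sets H of hypotheses such that theory & H is
        consistent and theory & H entails the manifestation."""
        for size in range(len(hypotheses) + 1):
            found = []
            for H in combinations(hypotheses, size):
                extended = lambda m, H=H: theory(m) and all(m[h] for h in H)
                if satisfiable(extended, variables) and \
                   entails(extended, manifestation, variables):
                    found.append(set(H))
            if found:
                return found       # stop at the smallest size admitting explanations
        return []

    # theory: rain -> wet, sprinkler -> wet; manifestation: wet
    variables = ['rain', 'sprinkler', 'wet']
    theory = lambda m: (not m['rain'] or m['wet']) and (not m['sprinkler'] or m['wet'])
    print(explanations(theory, ['rain', 'sprinkler'], lambda m: m['wet'], variables))
    # [{'rain'}, {'sprinkler'}]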

Verification programs for abduction
Francesco M. Donini and Paolo Liberatore
In Proceedings of the Fourteenth European Conference on Artificial Intelligence (ECAI 2000), pages 166-170, 2000.  
FTP download 
HTTP download

Abstract. We call verification the process of finding the actual explanation of a given set of manifestations. We consider an abductive setting, in which explanations are sets of assumptions. To filter out erroneous explanations, a verification program should propose which assumptions to check. Given the abductive setting of manifestations, assumptions, and a theory relating them, we study the complexity of providing a minimal set of assumptions to be checked in order to identify the actual explanation. We also study the case in which assumptions to be checked are given in a tree-like order.

Compilability of propositional abduction
Paolo Liberatore and Marco Schaerf
ACM Transactions on Computational Logic, 8(1):2, 2007.  
HTTP download

Abstract. Abduction is one of the most important forms of reasoning; it has been successfully applied to several practical problems such as diagnosis. In this paper we investigate whether the computational complexity of abduction can be reduced by an appropriate use of preprocessing. This is motivated by the fact that part of the data of the problem (namely, the set of all possible assumptions and the theory relating assumptions and manifestations) is often known before the rest of the problem. In this paper, we show some complexity results about abduction when compilation is allowed.

On the complexity of second-best abductive explanations
Paolo Liberatore and Marco Schaerf
International Journal of Approximate Reasoning, 63:22-31, 2015.

Abstract. When looking for a propositional abductive explanation of a given set of manifestations, an ordering between possible solutions is often assumed. While the complexity of computing optimal solutions is already known, in this paper we consider second-best solutions with respect to different orderings, and different definitions of what a second-best solution is: an optimal solution not already found, or a solution that is optimal among the ones not previously found.

Reasoning about Actions and Planning

The complexity of the language A
Paolo Liberatore
Electronic Transactions on Artificial Intelligence, 1(1-3):13-38, 1997. Available at http://www.ep.liu.se/ej/etai/1997/002/.  
HTTP download

Abstract. In this paper we analyze the complexity of the language A, proposed by Gelfond and Lifschitz (1993) to formalize properties of actions. We prove that the general language is NP-complete, thus intractable, and show some tractable (polynomial) subclasses of it. We also show how states that are unreachable affect the semantics of the language.
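
Illustration (not from the paper). As a reminder of what the language describes (this is my own minimal reading of the standard transition semantics of A, not the paper's formalization): a domain is a set of effect propositions ''A causes F if G1,...,Gn'' over fluent literals, and the result of an action in a state is obtained by applying the effects whose preconditions hold and leaving every other fluent unchanged.

    def holds(literal, state):
        """A state is the set of true fluents; '-f' denotes a negative literal."""
        return literal not in state if literal.startswith('-') else literal in state

    def result(action, state, effect_propositions):
        """Transition function: apply every effect proposition
        (action, effect, preconditions) whose preconditions hold in `state`;
        fluents not affected keep their value (inertia).  Contradictory
        applicable effects, which make the result undefined in A, are not detected."""
        new_state = set(state)
        for act, effect, preconditions in effect_propositions:
            if act == action and all(holds(p, state) for p in preconditions):
                if effect.startswith('-'):
                    new_state.discard(effect[1:])
                else:
                    new_state.add(effect)
        return new_state

    # 'shoot causes -alive if loaded' and 'shoot causes -loaded if loaded'
    D = [('shoot', '-alive', ['loaded']), ('shoot', '-loaded', ['loaded'])]
    print(result('shoot', {'alive', 'loaded'}, D))   # set(): neither alive nor loaded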

On non-conservative plan modification
Paolo Liberatore
In Proceedings of the Thirteenth European Conference on Artificial Intelligence (ECAI'98), pages 518-519, 1998.  
FTP download 
HTTP download

Abstract. The problem of minimally modifying a plan in response to changes in the specification of the planning problem has already been investigated in the literature. In this paper we consider the problem of non-minimal modification of plans from a computational point of view. We prove that while the general problem is intractable (that is, replanning from scratch is equally hard), there are scenarios in which the new plan can be built more efficiently (in polynomial time).

The size of MDP factored policies
Paolo Liberatore
In Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI 2002), pages 267-272, 2002.

Abstract. Policies of Markov Decision Processes (MDPs) tell the next action to execute, given the current state and (possibly) the history of actions executed so far. Factorization is used when the number of states is exponentially large: both the MDP and the policy can be then represented using a compact form, for example employing circuits. We prove that there are MDPs whose optimal policies require exponential space even in factored form.

On polynomial sized MDP succinct policies
Paolo Liberatore
Journal of Artificial Intelligence Research, 21:551-577, 2004.  
HTTP download

Abstract. Policies of Markov Decision Processes (MDPs) determine the next action to execute from the current state and, possibly, the history (the past states). When the number of states is large, succinct representations are often used to compactly represent both the MDPs and the policies in a reduced amount of space. In this paper, some problems related to the size of succinctly represented policies are analyzed. Namely, it is shown that some MDPs have policies that can only be represented in space super-polynomial in the size of the MDP, unless the polynomial hierarchy collapses. This fact motivates the study of the problem of deciding whether a given MDP has a policy of a given size and reward. Since some algorithms for MDPs work by finding a succinct representation of the value function, the problem of deciding the existence of a succinct representation of a value function of a given size and reward is also considered.

On the complexity of case-based planning
Paolo Liberatore
Journal of Experimental and Theoretical Artificial Intelligence, 17(3):283-295, 2005.

Abstract. This paper analyses the computational complexity of problems related to case-based planning: planning when a plan for a similar instance is known, and planning from a library of plans. It is proven that planning from a single case has the same complexity as generative planning (i.e., planning from scratch); using an extended definition of cases, complexity is reduced if the domain stored in the case is similar to the one for which plans are sought. Planning from a library of cases is shown to have the same complexity. In both cases, the complexity of planning remains, in the worst case, PSPACE-complete.

Satisfiability

Algorithms and experiments on finding minimal models
Paolo Liberatore
Technical Report 09-99, Dipartimento di Informatica e Sistemistica, Università di Roma ''La Sapienza'', 1999.  
FTP download 
HTTP download

Abstract. In this paper we consider the problem of finding minimal models of a set of propositional clauses. This problem has many applications: for instance, finding minimal-length plans and minimal diagnoses, and solving many other minimization and maximization problems. We have adapted the Davis-Putnam algorithm (that finds generic models of a set of clauses) in order to find minimal models. Experiments have been performed to determine which heuristics can be useful, and to decide in which cases the problem is hard and in which cases it is easy. A comparison with a local search algorithm is also shown.
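
Illustration (not from the report). A minimal model of a set of clauses is a model whose set of true atoms is not a strict superset of that of another model. The naive enumeration below only spells out this definition; the report adapts the Davis-Putnam procedure to avoid enumerating all assignments.

    from itertools import product

    def satisfies(assignment, clauses):
        """assignment: dict atom -> bool; a clause is a list of signed atoms (DIMACS-like)."""
        return all(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in clauses)

    def minimal_models(clauses, atoms):
        """Models whose set of true atoms is minimal under set inclusion."""
        models = []
        for bits in product([True, False], repeat=len(atoms)):
            assignment = dict(zip(atoms, bits))
            if satisfies(assignment, clauses):
                models.append(assignment)
        true_sets = [{a for a in atoms if m[a]} for m in models]
        return [m for m, t in zip(models, true_sets)
                if not any(other < t for other in true_sets)]

    # (x1 or x2) and (not x1 or x3): the minimal models are {x2} and {x1, x3}
    print(minimal_models([[1, 2], [-1, 3]], [1, 2, 3]))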

On the complexity of finding satisfiable subinstances in constraint satisfaction
Peter Jonsson and Paolo Liberatore
Technical Report TR99-038, Electronic Colloquium on Computational Complexity, 1999.  
FTP download

Abstract. We study the computational complexity of an optimization version of the constraint satisfaction problem: given a set F of constraint functions, an instance consists of a set of variables V related by constraints chosen from F and a natural number k. The problem is to decide whether there exists a subset V' of V such that the size of V' is greater than or equal to k and the subinstance induced by V' has a solution. For all possible choices of F, we show that this problem is either NP-hard or trivial. This hardness result makes it interesting to study relaxations of the problem which may have better computational properties. Thus, we study the approximability of the problem and we consider certain compilation techniques. In both cases, the results are not encouraging.
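
Illustration (not from the report). A brute-force formulation of the problem studied here (only to make the definition concrete; the report is about classifying its complexity, approximability and compilability): find the largest set of variables whose induced subinstance, i.e. the constraints mentioning only those variables, still has a solution.

    from itertools import combinations, product

    def induced(constraints, kept):
        """Constraints whose scope lies entirely within the kept variables."""
        return [(scope, allowed) for scope, allowed in constraints
                if set(scope) <= set(kept)]

    def solvable(constraints, variables, domain):
        """Naive search for an assignment satisfying every constraint."""
        for values in product(domain, repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if all(tuple(assignment[v] for v in scope) in allowed
                   for scope, allowed in constraints):
                return True
        return False

    def largest_satisfiable_subinstance(constraints, variables, domain):
        """Largest V' such that the subinstance induced by V' has a solution."""
        for size in range(len(variables), -1, -1):
            for kept in combinations(variables, size):
                if solvable(induced(constraints, kept), kept, domain):
                    return set(kept)
        return set()

    # x != y, y != z, x != z over a two-valued domain: any two of the variables survive
    unequal = {(0, 1), (1, 0)}
    constraints = [(('x', 'y'), unequal), (('y', 'z'), unequal), (('x', 'z'), unequal)]
    print(largest_satisfiable_subinstance(constraints, ['x', 'y', 'z'], [0, 1]))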

On the complexity of choosing the branching literal in DPLL
Paolo Liberatore
Artificial Intelligence, 116(1-2):315-326, 2000.  
HTTP download

Abstract. In this paper we analyze a computational problem arising from the well-known DPLL algorithm for checking satisfiability of propositional formulae. The DPLL algorithm is a backtracking algorithm enhanced with three rules to improve efficiency. We study the computational complexity of determining whether a variable leads to an optimal search tree, when it is chosen as the branching literal.
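
Illustration (not from the paper). For reference, a textbook-style DPLL skeleton with unit propagation only and a pluggable branching rule; the question studied in the paper is how hard it is to recognize the branching choices that lead to an optimal search tree, not how to implement the procedure itself.

    def unit_propagate(clauses):
        """Repeatedly assign the literals appearing in unit clauses."""
        clauses = [set(c) for c in clauses]
        while True:
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                return clauses
            l = units[0]
            reduced = []
            for c in clauses:
                if l in c:
                    continue               # clause satisfied by l
                reduced.append(c - {-l})   # -l is falsified: shrink the clause
            clauses = reduced

    def dpll(clauses, choose):
        """`choose` picks the branching literal from the current clause set."""
        clauses = unit_propagate(clauses)
        if not clauses:
            return True                    # every clause satisfied
        if any(len(c) == 0 for c in clauses):
            return False                   # empty clause: conflict
        l = choose(clauses)
        return dpll(clauses + [{l}], choose) or dpll(clauses + [{-l}], choose)

    # naive branching rule: the first literal of the first clause
    first_literal = lambda clauses: next(iter(clauses[0]))
    print(dpll([{1, 2}, {-1, 2}, {-2, 3}, {-3}], first_literal))   # False: unsatisfiable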

Solving QBF with SMV
Francesco M. Donini, Paolo Liberatore, Fabio Massacci, and Marco Schaerf
In Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR 2002), pages 578-589, 2002.  
HTTP download

Abstract. The possibility of solving the Quantified Boolean Formulae (QBF) problems using the SMV system is a consequence of two well-known theoretical results: the membership of QBF to PSPACE, and the PSPACE-hardness of LTL (and therefore, of SMV). Nevertheless, such results do not imply the existence of a reduction that is also of practical utility. In this paper, we show a reduction from QBF to SMV that is linear (instead of cubic), and uses a constant-size specification. This new reduction has three applications that the previous one does not: first, it allows for solving QBF problems using SMV-like systems, which are now more developed than direct QBF solvers; second, we can use it to verify whether the performance behavior of direct QBF solvers is intrinsic of the problem, or rather an effect of the solving algorithm; third, random hard SMV instances can be easily generated by reduction from QBF hard instances (whose generation method is now established).
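
Illustration (not from the paper). Just to fix what a QBF instance is (this evaluator is unrelated to the reduction to SMV described above): a quantified Boolean formula is true if the usual recursive expansion of its quantifier prefix succeeds.

    def qbf(prefix, matrix, assignment=None):
        """prefix: list of ('exists' | 'forall', variable); matrix: predicate over a dict."""
        assignment = dict(assignment or {})
        if not prefix:
            return matrix(assignment)
        quantifier, var = prefix[0]
        branches = (qbf(prefix[1:], matrix, {**assignment, var: value})
                    for value in (True, False))
        return any(branches) if quantifier == 'exists' else all(branches)

    # forall x exists y . (x <-> y) is true
    print(qbf([('forall', 'x'), ('exists', 'y')], lambda m: m['x'] == m['y']))   # True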

Complexity results on DPLL and resolution
Paolo Liberatore
ACM Transactions on Computational Logic, 7(1):84-107, 2006.  
HTTP download

Abstract. DPLL and resolution are two popular methods for solving the problem of propositional satisfiability. Rather than algorithms, they are families of algorithms, as their behavior depends on some choices they face during execution: DPLL depends on the choice of the literal to branch on; resolution depends on the choice of the pair of clauses to resolve at each step. The complexity of making the optimal choice is analyzed in this paper. Extending previous results, we prove that choosing the optimal literal to branch on in DPLL is Delta-log2-hard, and becomes NP^PP-hard if branching is only allowed on a subset of variables. Optimal choice in regular resolution is both NP-hard and CoNP-hard. The problem of determining the size of the optimal proofs is also analyzed: it is CoNP-hard for DPLL, and Delta-log2-hard if a conjecture we make is true. This problem is CoNP-hard for regular resolution.

Propositional Logic

The complexity of checking redundancy of CNF propositional formulae
Paolo Liberatore
In Proceedings of the Fifteenth European Conference on Artificial Intelligence (ECAI 2002), pages 262-266, 2002.  
HTTP download

Abstract. A knowledge base is redundant if it contains parts that can be inferred from the rest of it. We study the problem of checking whether a CNF formula (a set of clauses) is redundant, that is, it contains clauses that can be derived from the other ones. Any CNF formula can be made irredundant by deleting some of its clauses: what results is an irredundant equivalent subset (I.E.S.). We study the complexity of some related problems: verification, checking existence of an I.E.S. with a given size, checking necessary and possible presence of clauses in I.E.S.'s, and uniqueness.
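
Illustration (not from the paper). One irredundant equivalent subset can always be obtained by the obvious greedy procedure spelled out below (this only illustrates the definition, not the complexity results): repeatedly drop any clause entailed by the remaining ones.

    from itertools import product

    def entailed(clause, clauses, atoms):
        """`clause` is entailed by `clauses` iff no assignment satisfies all of
        `clauses` while falsifying `clause` (clauses are sets of signed atoms)."""
        for bits in product([True, False], repeat=len(atoms)):
            m = dict(zip(atoms, bits))
            sat = lambda c: any(m[abs(l)] == (l > 0) for l in c)
            if all(sat(c) for c in clauses) and not sat(clause):
                return False
        return True

    def irredundant_subset(clauses, atoms):
        """Greedily remove clauses entailed by the rest; the result is an
        irredundant equivalent subset (one of possibly many)."""
        clauses = list(clauses)
        changed = True
        while changed:
            changed = False
            for c in clauses:
                rest = [d for d in clauses if d is not c]
                if entailed(c, rest, atoms):
                    clauses = rest
                    changed = True
                    break
        return clauses

    # x1, x1 v x2, x1 v -x2: the greedy order matters; here {x1} is dropped first,
    # leaving the I.E.S. {x1 v x2, x1 v -x2}, while {x1} alone is another, smaller one
    print(irredundant_subset([{1}, {1, 2}, {1, -2}], [1, 2]))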

Conditional independence in propositional logic
Jérôme Lang, Paolo Liberatore, and Pierre Marquis
Artificial Intelligence, 141(1-2):79-121, 2002.  
HTTP download

Abstract. Independence--the study of what is relevant to a given problem of reasoning--is an important AI topic. In this paper, we investigate several notions of conditional independence in propositional logic: Darwiche and Pearl's conditional independence, and some more restricted forms of it. Many characterizations and properties of these independence relations are provided. We relate them to many other notions of independence pointed out so far in the literature (mainly formula-variable independence, irrelevance and novelty under various forms, separability, interactivity). We identify the computational complexity of conditional independence and of all these related independence relations.

k-approximating circuits
Marco Cadoli, Francesco M. Donini, Paolo Liberatore, and Marco Schaerf
Technical Report TR02-067, Electronic Colloquium on Computational Complexity, 2002.  
FTP download

Abstract. In this paper we study the problem of approximating a Boolean function using the Hamming distance as the approximation measure. Namely, given a Boolean function f, its k-approximation is the function fk returning true on the same points on which f does, plus all points whose Hamming distance from the previous set is at most k. We investigate whether k-approximation generates an exponential increase in size or not, when functions are represented as circuits. We also briefly investigate the increase in the size of the circuit for other forms of approximation.
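
Illustration (not from the report). The definition in the abstract translates directly into a brute-force check; this says nothing about circuit sizes, which are the actual subject of the report.

    from itertools import product

    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    def k_approximation(f, n, k):
        """The k-approximation of a Boolean function f on n variables: true on every
        point within Hamming distance k of some point on which f is true."""
        on_set = [x for x in product([0, 1], repeat=n) if f(x)]
        return lambda x: any(hamming(x, y) <= k for y in on_set)

    f = lambda x: x == (1, 1, 1)          # true only on 111
    f1 = k_approximation(f, 3, 1)
    print(f1((1, 1, 0)), f1((0, 0, 1)))   # True False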

Propositional independence - formula-variable independence and forgetting
Jérôme Lang, Paolo Liberatore, and Pierre Marquis
Journal of Artificial Intelligence Research, 18:391-443, 2003.  
HTTP download

Abstract. Independence - the study of what is relevant to a given problem of reasoning - has received an increasing attention from the AI community. In this paper, we consider two basic forms of independence, namely, a syntactic one and a semantic one. We show features and drawbacks of them. In particular, while the syntactic form of independence is computationally easy to check, there are cases in which things that intuitively are not relevant are not recognized as such. We also consider the problem of forgetting, i.e., distilling from a knowledge base only the part that is relevant to the set of queries constructed from a subset of the alphabet. While such a process is computationally hard, it allows for a simplification of subsequent reasoning, and can thus be viewed as a form of compilation: once the relevant part of a knowledge base has been extracted, all reasoning tasks to be performed can be simplified.
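
Illustration (not from the paper). Forgetting has a simple semantic recipe in the propositional case, which the sketch below spells out (the paper studies its properties and complexity, not this naive computation): the result is satisfied by an assignment iff the original formula is satisfied with the forgotten variable set either way.

    def forget(formula, variable):
        """Forgetting: existentially quantify `variable` away.  The result says
        exactly what `formula` says about the remaining variables."""
        return lambda m: formula({**m, variable: True}) or formula({**m, variable: False})

    # forgetting q from (p and q) leaves just p
    f = lambda m: m['p'] and m['q']
    g = forget(f, 'q')
    print(g({'p': True}), g({'p': False}))   # True False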

Redundancy in logic I: CNF propositional formulae
Paolo Liberatore
Artificial Intelligence, 163(2):203-232, 2005.

Abstract. A knowledge base is redundant if it contains parts that can be inferred from the rest of it. We study some problems related to the redundancy of a CNF formula. In particular, any CNF formula can be made irredundant by deleting some of its clauses: what results is an irredundant equivalent subset. We study the complexity of problems related to irredundant equivalent subsets: verification, checking existence of an irredundant equivalent subset with a given size, checking necessary and possible presence of clauses in irredundant equivalent subsets, and uniqueness. We also consider the problem of redundancy with different definitions of equivalence.

Redundancy in logic II: 2CNF and Horn propositional formulae
Paolo Liberatore
Artificial Intelligence, 172(2-3):265-299, 2008.

Abstract. We report results about the redundancy of formulae in 2CNF form. In particular, we give a slight improvement over the trivial redundancy algorithm and give some complexity results about some problems related to finding Irredundant Equivalent Subsets (I.E.S.) of 2CNF formulae. The problems of checking whether a 2CNF formula has a unique I.E.S. and checking whether a clause is in all its I.E.S.'s are polynomial. Checking whether a 2CNF formula has an I.E.S. of a given size and checking whether a clause is in some I.E.S. of a 2CNF formula are polynomial or NP-complete depending on whether the formula is cyclic. Some results about Horn formulae are also reported.

Redundancy in logic III: Non-monotonic reasoning
Paolo Liberatore
Artificial Intelligence, 172(11):1317-1359, 2008.

Abstract. Results about the redundancy of certain versions of circumscription and default logic are presented. In particular, propositional circumscription where all variables are minimized and skeptical default logics are considered. This restricted version of circumscription is shown to have the unitary redundancy property: a CNF formula is redundant (it is equivalent to one of its proper subsets) if and only if it contains a redundant clause (it is equivalent to itself minus one clause); default logic does not have this property in general. We also give the complexity of checking redundancy in the considered formalisms.

Preprocessing and Compact Representations

Feasibility and unfeasibility of off-line processing
Marco Cadoli, Francesco M. Donini, Paolo Liberatore, and Marco Schaerf
In Proceedings of the Fourth Israeli Symposium on Theory of Computing and Systems (ISTCS'96), pages 100-109. IEEE Computer Society, 1996.  
FTP download 
HTTP download

Abstract. We formally investigate the idea of processing off-line part of the input data in order to speed up on-line computing. In particular, we focus on off-line processing for intractable decision problems. To this end, we define new complexity classes and reductions, and find complete problems.

Comparing space efficiency of propositional knowledge representation formalisms
Marco Cadoli, Francesco M. Donini, Paolo Liberatore, and Marco Schaerf
In Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR'96), pages 364-373. Morgan Kaufmann, Los Altos, 1996.  
FTP download 
HTTP download

Abstract. We investigate the space efficiency of a Propositional Knowledge Representation (PKR) formalism. Informally, the space efficiency of a formalism F in representing a certain piece of knowledge alpha is the size of the shortest formula of F that represents alpha. In this paper we assume that knowledge is either a set of propositional interpretations or a set of formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures, the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators.

On the compilability of diagnosis, planning, reasoning about actions, belief revision, etc.
Paolo Liberatore
In Proceedings of the Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR'98), pages 144-155, 1998.  
FTP download 
HTTP download

Abstract. In this paper we investigate the usefulness of preprocessing part of the input of a given problem to improve efficiency. We extend the results of [Cadoli et al. ISTCS, 1996] by giving sufficient conditions to prove the unfeasibility of reducing the on-line complexity via an off-line preprocessing. We analyze the problems of diagnosis, planning, reasoning about actions, and belief revision. We also analyze other problems from various fields.

Compilation of intractable problems and its application to artificial intelligence
Paolo Liberatore
PhD thesis, Dipartimento di Informatica e Sistemistica, Università di Roma ''La Sapienza'', 1998.  
FTP download 
HTTP download

Abstract. Intractable problems are hard to solve. Since the work ''has to be done'', various solutions have been developed. In this dissertation we concentrate on compilation. We have an intractable problem, such that each instance is composed of two parts: one (called the fixed part) is known in advance, while the other (called the varying part) only comes at execution time. Our idea is to solve the problem in two steps: 1. Take the fixed part and compile it into a new data structure; 2. Take the new data structure and the varying part, and produce the output. If the second step can be accomplished in polynomial time, the problem is said to be compilable to polynomial time. The dissertation contains a formal definition of compilation and compilability of hard problems. Namely, two hierarchies of classes of problems (similar to the polynomial hierarchy) are defined. One of them is used to prove that problems are compilable, while the other one is mainly used to prove that some problems are not compilable. Applications of this framework include: compilability of AI problems, problems on graphs, compact representation of AI operators. A comparison with the non-uniform polynomial hierarchy and with the FPT classes is given.

Space efficiency of propositional knowledge representation formalisms
Marco Cadoli, Francesco M. Donini, Paolo Liberatore, and Marco Schaerf
Journal of Artificial Intelligence Research, 13:1-31, 2000.  
HTTP download

Preprocessing of intractable problems
Marco Cadoli, Francesco M. Donini, Paolo Liberatore, and Marco Schaerf
Information and Computation, 176(2):89-120, 2002.  
FTP download 
HTTP download

Abstract. Some computationally hard problems, e.g., deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess off-line this known part so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for other ones their compilability implies some consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, ''completeness'' meaning they are ''the least likely to be compilable''. We also investigate preprocessing that does not yield polynomial-time on-line algorithms, but generically ''decreases'' complexity. This leads us to define ''hierarchies of compilability'', that are the analog of the polynomial hierarchy. A detailed comparison of our framework to the idea of ''parameterized tractability'' shows the differences between the two approaches.

Monotonic reductions, representative equivalence, and compilation of intractable problems
Paolo Liberatore
Journal of the ACM, 48(6):1091-1125, 2001.  
HTTP download

Abstract. The idea of preprocessing part of the input of a problem in order to improve efficiency has been employed by several researchers in several areas of computer science. In this paper we show sufficient conditions to prove that an intractable problem cannot be efficiently solved even allowing an exponentially long preprocessing phase. The generality of such conditions is shown by applying them to various problems coming from different fields. While the results may seem to discourage the use of compilation, we present some evidence that such negative results are useful in practice.

The size of BDDs and other data structures in temporal logics model checking
Andrea Ferrara, Paolo Liberatore, and Marco Schaerf
IEEE Transactions on Computers, 65:3148-3156, 2016.

Abstract. Temporal Logic Model Checking is a verification method in which we describe a system, the model, and then we verify whether important properties, expressed in a temporal logic formula, hold in the system. Many Model Checking tools employ BDDs or some other data structure to represent sets of states. It has been empirically observed that the BDDs used in these algorithms may grow exponentially as the model and formula increase in size. We formally prove that no kind of data structure of polynomial size can represent the set of valid initial states for all models and all formulae. This result holds for all data structures where a state can be checked in polynomial time. Therefore, it holds not only for all types of BDDs regardless of variable ordering, but also for more powerful data structures, such as RBCs, MTBDDs, ADDs and SDDs. Thus, the size explosion of BDDs is not a limit of these specific data representation structures, but is unavoidable: every formalism used in the same way would lead to an exponential size blow up.

Teaching

Complementi ed esercizi di programmazione in PASCAL
Diego Calvanese, Paolo Liberatore, Fabio Massacci, and Riccardo Rosati
Progetto Leonardo. Esculapio Editore, Bologna, 1st edition, 1998. In Italian. Programs available on the web at http://www.dis.uniroma1.it/%7epascal.


Last update: Tuesday, 06-Jun-2023 15:02:16 CEST