Abstract: The inference problem for propositional circumscription is known to be highly intractable and, in fact, harder than the inference problem for classical propositional logic. More precisely, in its full generality this problem is Π_2^p-complete, which means that it has the same inherent computational complexity as the satisfiability problem for quantified Boolean formulas with two alternations (universal-existential) of quantifiers. We use Schaefer's framework of generalized satisfiability problems to study the family of all restricted cases of the inference problem for propositional circumscription. Our main result yields a complete classification of the "truly hard" (Π_2^p-complete) and the "easier" cases of this problem (reducible to the inference problem for classical propositional logic). Specifically, we establish a dichotomy theorem which asserts that each such restricted case either is Π_2^p-complete or is in coNP. Moreover, we provide efficiently checkable criteria that tell apart the "truly hard" cases from the "easier" ones. We establish our results both for formulas that may contain constants and for constant-free formulas. The present work complements a recent paper by the same authors, where a complete classification into hard and easy cases of the model-checking problem in circumscription was established.
Abstract: We consider the performance of a number of DPLL algorithms on random 3-CNF formulas with n variables and m = rn clauses. A long series of papers analyzing so-called “myopic” DPLL algorithms has provided a sequence of lower bounds for their satisfiability threshold. Indeed, for each myopic algorithm A it is known that there exists an algorithm-specific clause density, r_A, such that if r < r_A, the algorithm finds a satisfying assignment with high probability. We prove exponential lower bounds on the running time of ordered DLL for all r > 2.78 and the same is true for generalized unit clause for all r > 3.1. Our results imply exponential lower bounds for many other myopic algorithms for densities similarly close to the corresponding r_A.
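As an illustration of the unit-clause mechanism that the myopic algorithms above share, here is a minimal DPLL sketch in Python. It is illustrative only: it is not any of the specific algorithms analyzed in the abstract, and its branching rule is a naive placeholder.

```python
def dpll(clauses):
    """Return a satisfying assignment as {var: bool}, or None if unsatisfiable.
    Clauses are lists of nonzero ints: literal v means var v is true, -v false."""

    def assign(cls, lit):
        # Simplify under "lit is true": drop satisfied clauses, shrink the rest.
        out = []
        for c in cls:
            if lit in c:
                continue
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None  # empty clause: conflict
            out.append(reduced)
        return out

    def solve(cls, assignment):
        # Unit-clause rule: the step "myopic" algorithms apply eagerly.
        while any(len(c) == 1 for c in cls):
            lit = next(c[0] for c in cls if len(c) == 1)
            assignment = {**assignment, abs(lit): lit > 0}
            cls = assign(cls, lit)
            if cls is None:
                return None
        if not cls:
            return assignment
        lit = cls[0][0]  # naive branching choice; real heuristics differ here
        for choice in (lit, -lit):
            sub = assign(cls, choice)
            if sub is not None:
                res = solve(sub, {**assignment, abs(choice): choice > 0})
                if res is not None:
                    return res
        return None

    return solve([list(c) for c in clauses], {})
```

On a formula such as [[1, 2], [-1, 2], [-2, 3]], a single branch on the first literal triggers a cascade of unit propagations; the exponential behavior discussed above arises when many branches fail and force backtracking.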
Abstract: For various random constraint satisfaction problems there is a significant gap between the largest constraint density for which solutions exist and the largest density for which any polynomial-time algorithm is known to find solutions. Examples of this phenomenon include random k-SAT, random graph coloring, and a number of other random constraint satisfaction problems. To understand this gap, we study the structure of the solution space of random k-SAT (i.e., the set of all satisfying assignments viewed as a subgraph of the Hamming cube). We prove that for densities well below the satisfiability threshold, the solution space decomposes into an exponential number of connected components, and we give quantitative bounds for their diameter, volume, and number.
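The solution-space geometry described above can be explored directly on tiny instances. The following brute-force sketch (illustrative only; it is not the proof technique of the abstract) enumerates satisfying assignments and groups them into connected components of the Hamming cube:

```python
from itertools import product

def solutions(clauses, n):
    """All satisfying assignments of a CNF over variables 1..n, as bool tuples."""
    return [bits for bits in product((False, True), repeat=n)
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)]

def components(sols):
    """Connected components of the solution space, where two assignments are
    adjacent iff they differ in exactly one coordinate (Hamming distance 1)."""
    remaining = set(sols)
    comps = []
    while remaining:
        comp = {remaining.pop()}
        frontier = list(comp)
        while frontier:
            s = frontier.pop()
            for i in range(len(s)):
                t = s[:i] + (not s[i],) + s[i + 1:]  # flip coordinate i
                if t in remaining:
                    remaining.remove(t)
                    comp.add(t)
                    frontier.append(t)
        comps.append(comp)
    return comps
```

For instance, the formula x1 ↔ x2 (clauses [[1, -2], [-1, 2]]) has exactly the two solutions 00 and 11, which sit at Hamming distance 2, so its solution space splits into two components.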
Abstract: A dichotomy theorem for a class of decision problems is a result asserting that certain problems in the class are solvable in polynomial time, while the rest are NP-complete. The first remarkable such dichotomy theorem was proved by Schaefer in 1978. It concerns the class of generalized satisfiability problems Sat(S), whose input is a CNF(S)-formula, i.e., a formula constructed from elements of a fixed set S of generalized connectives using conjunctions and substitutions by variables. Here, we investigate the complexity of minimal satisfiability problems Min Sat(S), where S is a fixed set of generalized connectives. The input to such a problem is a CNF(S)-formula and a satisfying truth assignment; the question is to decide whether there is another satisfying truth assignment that is strictly smaller than the given truth assignment with respect to the coordinate-wise partial order on truth assignments. Minimal satisfiability problems were first studied by researchers in artificial intelligence while investigating the computational complexity of propositional circumscription. The question of whether dichotomy theorems can be proved for these problems was raised at that time, but was left open. We settle this question affirmatively by establishing a dichotomy theorem for the class of all Min Sat(S)-problems, where S is a finite set of generalized connectives. We also prove a dichotomy theorem for a variant of Min Sat(S) in which the minimization is restricted to a subset of the variables, whereas the remaining variables may vary arbitrarily (this variant is related to extensions of propositional circumscription and was first studied by Cadoli). Moreover, we show that similar dichotomy theorems hold also when some of the variables are assigned constant values. Finally, we give simple criteria that tell apart the polynomial-time solvable cases of these minimal satisfiability problems from the NP-complete ones.
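The decision problem just described can be stated operationally. The following brute-force Python sketch (illustrative only; the dichotomy above concerns asymptotic complexity, and the function names here are ours) checks whether a given satisfying assignment admits a strictly smaller satisfying assignment in the coordinate-wise order:

```python
from itertools import product

def satisfies(bits, clauses):
    # bits[i] is the value of variable i+1; literal v > 0 wants True, v < 0 False.
    return all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

def has_smaller_model(clauses, bits):
    """Given a satisfying assignment `bits`, decide whether some satisfying
    assignment lies strictly below it in the coordinate-wise partial order
    (False < True in each coordinate)."""
    for cand in product((False, True), repeat=len(bits)):
        if cand != bits and all(c <= b for c, b in zip(cand, bits)) \
                and satisfies(cand, clauses):
            return True
    return False
```

For example, for the clause x1 ∨ x2 the all-true assignment is not minimal (flipping either variable to False still satisfies it), whereas for the two unit clauses x1 and x2 the all-true assignment is the unique, hence minimal, model.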
Abstract: One of the most challenging problems in probability and complexity theory is
to establish and determine the satisfiability threshold, or phase transition, for
random k-SAT instances: Boolean formulas consisting of clauses with exactly k
literals. As the previous part of the volume has explored, empirical observations
suggest that there exists a critical ratio of the number of clauses to the number
of variables, such that almost all randomly generated formulas with a higher
ratio are unsatisfiable while almost all randomly generated formulas with a lower
ratio are satisfiable. The statement that such a crossover point really exists is
called the satisfiability threshold conjecture. Experiments hint at such a direction,
but as far as theoretical work is concerned, progress has been difficult. In an
important advance, Friedgut [23] showed that the phase transition is a sharp one,
though without proving that it takes place at a “fixed” ratio for large formulas.
Otherwise, rigorous proofs have focused on providing successively better upper
and lower bounds for the value of the (conjectured) threshold. In this chapter, our
Abstract: The problem of determining the unsatisfiability threshold for random 3-SAT formulas consists in determining the clause to variable
ratio that marks the experimentally observed abrupt change from almost surely satisfiable formulas to almost surely unsatisfiable ones. Up to now, increasingly better lower and upper bounds on the actual threshold value have been rigorously established. In this paper, we consider the problem of bounding the threshold value from above using methods that, we believe, are of interest in their own right. More specifically, we show how the method of local maximum satisfying truth assignments can be combined with results for
the occupancy problem in schemes of random allocation of balls into bins in order to achieve an upper bound for the unsatisfiability
threshold less than 4.571. In order to obtain this value, we establish a bound on the q-binomial coefficients (a generalization of the
binomial coefficients). No such bound was previously known, despite the extensive literature on q-binomial coefficients. Finally,
to prove our result we had to establish certain relations among the conditional probabilities of an event in various probabilistic
models for random formulas. It turned out that these relations were considerably harder to prove than the corresponding ones for
unconditional probabilities, which were previously known.
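The q-binomial (Gaussian binomial) coefficients mentioned above generalize the ordinary binomial coefficients. A small sketch using the standard q-Pascal recurrence [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q (textbook material, not the bound established in the paper) makes the object concrete:

```python
def q_binomial(n, k):
    """Coefficient list (lowest degree first) of the Gaussian binomial
    coefficient [n choose k]_q, built from the q-Pascal recurrence
    [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = q_binomial(n - 1, k - 1)          # [n-1, k-1]_q
    b = [0] * k + q_binomial(n - 1, k)    # shift by k = multiply by q^k
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out
```

Setting q = 1 (i.e., summing the coefficients) recovers the ordinary binomial coefficient: for example, [4,2]_q = 1 + q + 2q^2 + q^3 + q^4, which sums to C(4,2) = 6 and evaluates to 35 at q = 2.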
Abstract: The interpolation method, originally developed in statistical physics, transforms distributions of random CSPs to distributions of much simpler problems while bounding the change in a number of associated statistical quantities along the transformation path. After a number of further mathematical developments, it is now known that, in principle, the method can yield rigorous unsatisfiability bounds if one “plugs in an appropriate functional distribution”. A drawback of the method is that identifying appropriate distributions and plugging them in leads to major analytical challenges as the distributions required are, in fact, infinite dimensional objects. We develop a variant of the interpolation method for random CSPs on arbitrary sparse degree distributions which trades accuracy for tractability. In particular, our bounds only require the solution of a 1-dimensional optimization problem (which typically turns out to be very easy) and as such can be used to compute explicit rigorous unsatisfiability bounds.