GSoC 2014 Application Soumya Dipta Biswas: Propositional and First Order Logic

lscpu

Name: Soumya Dipta Biswas
University: Birla Institute of Technology and Science, Pilani
Branch: M.Sc. (Tech.) Information Systems
Email: [email protected]
Github: SDB1323

man self

I have been using Python for almost 2 years now and feel quite at home with it. I started using Python to create proofs of concept for my projects, but gradually switched to it full time. I am also quite comfortable with C, C++, Java and (Common) Lisp, as well as PHP, JavaScript and family. I use Ubuntu x.04 (currently 13) and upgrade my OS once a year. I am new to git but have gotten a good hang of it over the past couple of months while working with SymPy. I switch between Sublime, gedit and IDLE depending upon the type of task at hand.

I have studied logic previously in the Logic in Computer Science course. I was the teaching assistant for that course last semester and am currently the assistant for Data Structures and Algorithms. I have spent a good amount of time with the concepts of logic, propositional and predicate alike, and find the field genuinely interesting.

git log

All of my contributions have been to sympy.logic. Currently only one of my PRs has been merged; the others, however, are complete, and once they can be benchmarked against the improvements suggested in this proposal, I hope to merge them too.

  • Add Semantic Tableaux for SAT Solving #2964
  • Improve pl_true #2964
  • Add support for Equivalent with multiple arguments #2869
  • Add Tseitin Transformation for overall faster SAT solving #2852
  • Fix the wrong result given by the Propositional Knowledge Base #2830
  • Add function to check validity of formulas #2830

from __future__ import plan

Abstract

This proposal deals with extending the existing module for propositional logic and creating a new module for First Order Logic (FoL). The extension to the propositional module is in the form of faster and more memory efficient algorithms and making the module ready for use by the assumption system. Currently, there exists no support for FoL in SymPy. My idea is to lay down a basic architecture for the same and implement all its essential algorithms and move towards an inference system.

Implementation

PROPOSITIONAL LOGIC

Remove Automatic Evaluation

Based on Issue 6434, Xor and ITE will return objects of their respective types instead of automatically evaluating to And/Or. Additionally, Not will no longer automatically apply De Morgan's law. However, Nand and Nor will still evaluate to Not(And(...)) and Not(Or(...)) respectively.
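A sketch of the proposed behaviour as doctests:

>>> Xor(a, b)          # no longer rewritten to And/Or
Xor(a, b)
>>> Not(And(a, b))     # De Morgan's law no longer applied automatically
Not(And(a, b))
>>> Nand(a, b)         # Nand/Nor still evaluate
Not(And(a, b))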

Negation Normal Form

is_nnf(expr) checks whether the formula is in NNF; to_nnf(expr) converts a formula to NNF. Conversion to NNF removes all forms of implication, Xor and ITE, and pushes negation down to the literals. Currently, eliminate_implications is called by almost every function (it will now internally call to_nnf instead), so no extra time is spent on, and no compatibility issues arise from, the conversion of Xor, ITE and Not to And/Or; the automatic evaluation can therefore be safely omitted.
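A sketch of the intended doctests:

>>> is_nnf(~(a | b))
False
>>> to_nnf(~(a | b))
And(Not(a), Not(b))
>>> to_nnf(a >> b)
Or(Not(a), b)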

Logical Equivalence

is_equivalent(expr1, expr2, … ) returns True if the given expressions are logically equivalent. This is different from Equivalent, which wraps its arguments in the Equivalent operator instead of returning the result. Additionally, == is overloaded to return is_equivalent, since a formula in any of its equivalent forms is still the same formula. For an exact match, i.e. value as well as form, `is` can be used.

>>> Xor(a, b)
Xor(a, b)
>>> to_nnf(Xor(a, b))
Or(And(a, Not(b)), And(Not(a), b))
>>> Xor(a, b) == Or(And(a, Not(b)), And(Not(a), b))
True
>>> Xor(a, b) is Or(And(a, Not(b)), And(Not(a), b))
False
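A couple of doctests for is_equivalent itself, including the multi-argument form from the signature above (outputs assume the semantics just described):

>>> is_equivalent(Xor(a, b), Or(And(a, Not(b)), And(Not(a), b)))
True
>>> is_equivalent(a >> b, ~b >> ~a, ~a | b)
True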

Conversion to Normal Forms (cnf, dnf)

Currently the conversion to normal forms is both inefficient and produces redundant terms (Issue 7175). While the simplify parameter alleviates the problem of redundancy, it makes the conversion slower. This version of to_cnf/to_dnf uses an iterative manipulation of sets of literals (inspired by Semantic Tableaux) to eliminate the function overhead (both time and space) of recursion and to eliminate redundant clauses like a | ~a.

>>> to_cnf((a & b) | (~a & ~b), simplify=False)
And(Or(Not(a), b), Or(Not(b), a))

Small Functions

  • pretty_print(expr) prints the formula using ~, &, |, >> instead of the usual function notation.
  • generate_formula(num_var, num_clauses, operators=[Not, And, Or, Implies, Equivalent]) generates random propositional expressions, especially for use in testing.
  • entails(formula, formula_set=set()) checks whether the given formula set entails a particular formula. When the formula set is empty, the function returns True exactly when the formula is valid (see the sketch below).
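A sketch of the expected behaviour (the pretty-printed output format is only indicative):

>>> pretty_print(Implies(a, Or(b, Not(c))))
a >> (b | ~c)
>>> entails(a, {a | b, ~b})
True
>>> entails(a | ~a)
True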

SAT Solver

While the solver alone could have been my entire field of concentration, I do not wish to create a solver from scratch. Instead I plan to build upon dpll2, which is already present in SymPy, and utilize the rest of the time to create the FoL module. dpll2 already supports the VSIDS heuristic and watched literals; clause learning is also implemented, but is not used by default.

All SAT:
All SAT finds all the models of a given formula. The macro solution to the problem is a function which calls the SAT solver as many times as there are models. However, this procedure completely forgoes the benefit gained from clause learning, as it restarts the SAT procedure each time. The micro translation of the same procedure is the blocking clause method: whenever the SAT solver finds a solution, it adds the solution to the list of models, but treats the event as a conflict and adds the negation of the decision literals to the set of clauses. This results in the solver returning all models of the formula. Even after these optimizations, there exist better algorithms for All SAT, and this paper describes a memory-efficient SAT procedure for finding all solutions (though I am still working on understanding its implementation).
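A minimal sketch of the blocking-clause loop, assuming a hypothetical solve(clauses) wrapper around dpll2 that returns a model as a set of DIMACS-style integer literals, or None if unsatisfiable. Note that this naive version blocks the full model and calls a stateless solver each time, which is exactly the learned-clause loss described above; the real implementation would add the blocking clause inside the solver and continue the search:

def all_sat(clauses, solve):
    clauses, models = list(clauses), []
    while True:
        model = solve(clauses)
        if model is None:
            return models
        models.append(model)
        # treat the solution as a conflict: add its negation as a new clause
        clauses.append([-lit for lit in model])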

Minimal Unsatisfiable Subset:
The minimal unsatisfiable subset is a set of clauses which becomes satisfiable if any one of its clauses is removed. This set provides insight into why the formula is unsatisfiable, i.e. which combination of clauses causes the mutual conflict that makes the entire formula inconsistent. The popular algorithms for finding this minimal unsatisfiable core are either resolution based or clause-selector-variable based; the former seems to work better for a single MUC, the latter for all minimal subsets. The incremental SAT solver optimization is already planned as part of the All SAT procedure and can be reused here. This is a rather recent area, and I will do a comprehensive study of the methods during the community bonding period to determine which methods (and which of their optimizations) best suit our needs.
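For reference, the simplest (deletion-based) MUS extraction looks like the following sketch; the resolution- and clause-selector-variable-based methods mentioned above are more sophisticated alternatives. is_sat(clauses) is again a stand-in for the SAT solver, and the input set is assumed unsatisfiable:

def minimal_unsat_subset(clauses, is_sat):
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if is_sat(trial):
            i += 1        # the clause is necessary for unsatisfiability
        else:
            core = trial  # still unsatisfiable without it, so drop it
    return core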

SAT Benchmarks:
The most important aspect of the solver is speed (after correctness, of course). Benchmarking the solver with various mixes of features is critical, as many aspects govern its speed: while the Tseitin transformation dramatically reduces CNF encoding time, it increases the number of variables and clauses, which slows down SAT. Tuning the parameters associated with SAT (VSIDS decay, clause management policy, etc.) will be necessary to optimize performance. Additionally, testing whether Semantic Tableaux still performs better than the new SAT solver is also going to be important.

FIRST ORDER LOGIC

The major idea here is to create a basic architecture for First Order Logic which can be used for basic inference, and later to move towards an entire inference engine.

Terms, Predicates and Quantifiers

This part of the project involves creating the basic structure on which the module will operate. Constants and variables will simply be specializations of the Symbol class. Functions and predicates will be a bit more sophisticated: firstly, calling any function or predicate should return the arguments wrapped in an object of the same type; secondly, the arity of the function/predicate is of consequence; thirdly, both functions and predicates need to keep track of their arguments so as to form well-formed formulas. Quantifiers are very similar to the operators of propositional logic, and will keep track of all the quantified variables and, of course, the formula itself. A rough class hierarchy would be something like this (a code sketch follows the list):

  • FoL(Basic) - Base class for all FoL objects; sets is_FoL.
  • Callable() - Base class for functions, predicates and quantifiers, which wrap their arguments when called (needs a better name)
  • Predicate(FoL, Callable) - Predicate class
  • Term(FoL) - Base class for all terms (constants, variables, functions). This is important as both predicates and functions can only take terms as arguments
  • Constant(Term, Symbol) - Constant class
  • Variable(Term, Symbol) - Variable class
  • Function(Term, Callable) - Function class
  • Quantifier(FoL, Callable) - Quantifier class
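A very rough, self-contained Python sketch of this skeleton (hypothetical; the subclassing of sympy Basic/Symbol, caching and printing are deliberately omitted):

class FoL(object):
    """Base class for all FoL objects (would derive from sympy Basic)."""
    is_FoL = True

class Term(FoL):
    """Base class for constants, variables and applied functions."""

class Constant(Term):
    def __init__(self, name):
        self.name = name

class Variable(Term):
    def __init__(self, name):
        self.name = name

class AppliedFunction(Term):
    """A function symbol applied to terms; itself a term."""
    def __init__(self, func, args):
        self.func, self.args = func, args

class Function(Term):
    """Calling a function symbol wraps the arguments in a self-type object."""
    def __init__(self, name, arity=1):
        self.name, self.arity = name, arity

    def __call__(self, *args):
        # enforce arity and well-formedness: arguments must be terms
        if len(args) != self.arity:
            raise ValueError("%s expects %d argument(s)" % (self.name, self.arity))
        if not all(isinstance(arg, Term) for arg in args):
            raise TypeError("functions can only be applied to terms")
        return AppliedFunction(self, args)

# Predicate and Quantifier would follow the same wrapping pattern via Callable.

The user-facing constructors would then behave roughly as follows: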
# Constants
>>> socrates = constant('socrates')

# Variables
>>> X, Y = variable('X Y')

# Functions
>>> father = function('father')

# Predicates
>>> F, G = predicate('F G')
>>> KNOWS = predicate('KNOWS', arity=2)
>>> KNOWS(father(socrates), socrates)
KNOWS(father(socrates), socrates)

# Quantifiers
# Universal Quantifier (ForAll)
>>> MAN, MORTAL = predicate('MAN MORTAL')
>>> ForAll(X, MAN(X) >> MORTAL(X))
ForAll(X, Implies(MAN(X), MORTAL(X)))

# Existential Quantifier (Exists)
>>> Q, R = predicate('Q R', arity=2)
>>> ForAll(X, Q(X, Y) >> Exists(Y, R(X, Y)))
ForAll(X, Implies(Q(X, Y), Exists(Y, R(X, Y))))

Numbered Terms:
Besides the general declaration of terms, there needs to be a special numbered_term, since there are many scenarios where terms have to be created at runtime (see the usage sketch after the list):

  • Standardization requires numbered_variable
  • Skolemization requires numbered_constant (skolem constant) and numbered_function (skolem function)
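Hypothetical usage, with the naming convention (_v, _c, _f) that the examples below assume:

>>> numbered_variable()
_v0
>>> numbered_variable()
_v1
>>> numbered_constant()
_c0
>>> numbered_function(arity=2)
_f0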

Automatic Evaluation

While most automatic evaluation is quite unnecessary, some of it helps keep formulas in a consistent form. This evaluation is targeted at making the same formula more compact, in the same spirit as And(And(a, b), And(c, d)) evaluating to And(a, b, c, d).

>>> ForAll(X, F(X)) & ForAll(X, G(X))
ForAll(X, And(F(X), G(X)))
>>> Exists(X, F(X)) | Exists(X, G(X))
Exists(X, Or(F(X), G(X)))
>>> ForAll(X, ForAll(Y, F(X) & G(X)))
ForAll({X, Y}, And(F(X), G(X)))
>>> Exists(X, Exists(Y, F(X) & G(X)))
Exists({X, Y}, And(F(X), G(X)))

The last two examples require some discussion. Even though the syntax of the evaluated expression looks quite unnatural, it is both short and conveys the fact that the order of the variables is unimportant. The constructors of the quantifiers will have to accept input in both forms, i.e. Exists(X, expr) and Exists({X, Y}, expr), or a common Exists({X}, expr). However, these are details I would rather tackle during the implementation phase, going with the form best suited to the architecture.

Ground, Closed, Free, Bound

is_ground(expr) returns whether the given formula is ground (contains no variables).
is_closed(expr) returns whether the given formula is closed (contains no free variables).
is_free(var, expr) returns whether at least one occurrence of the given variable in the formula is free (not in the scope of a quantifier binding it).
is_bound(var, expr) returns whether at least one occurrence of the given variable in the formula is bound (in the scope of a quantifier binding it).

>>> is_ground(ForAll(X, MAN(X) >> MORTAL(X)))
False
>>> is_closed(ForAll(X, MAN(X) >> MORTAL(X)))
True
>>> is_closed(ForAll(X, KNOWS(X, Y)))
False
>>> is_free(X, ForAll(X, Q(X, Y) >> Exists(Y, R(X, Y))))
False
>>> is_free(Y, ForAll(X, Q(X, Y) >> Exists(Y, R(X, Y))))
True
>>> is_bound(Y, ForAll(X, Q(X, Y) >> Exists(Y, R(X, Y))))
True

Interpretation

Returns the truth value of the formula under the given interpretation. To maintain consistency with pl_true, the signature is fol_true(expr, domain, constant_map, function_map, predicate_map).

>>> a = constant('a')
>>> f = function('f')
>>> expr = ForAll(X, F(X) >> Q(f(X), a))
>>> domain = {1, 2}
>>> constant_map = {a: 1}
>>> function_map = {f: {1: 2, 2: 1}}
>>> predicate_map = {F: {1: False, 2: True}, Q: {(1, 1): True, (1, 2): True, (2, 1): False, (2, 2): True}}
>>> fol_true(expr, domain, constant_map, function_map, predicate_map)
True

The above method, while necessary to include, is not very user friendly for interpretation/model checking. A better idea is to additionally support something like the following.

>>> def even(x):
...     return x % 2 == 0
>>> def not_greater(x, y):
...     return x <= y
>>> fol_true(expr, domain, constant_map, function_map, {F: even, Q: not_greater})
True

The same approach can be used for function_map: the given callable is evaluated over the domain, and the resulting values are used to determine the truth value of the formula under the given interpretation.
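The core of fol_true over a finite domain is just exhaustive checking of assignments. A toy sketch of the universal case (forall_holds and its arguments are hypothetical names):

from itertools import product

def forall_holds(variables, holds, domain):
    # a ForAll is true iff the matrix holds under every assignment
    # of domain elements to the quantified variables
    return all(holds(dict(zip(variables, values)))
               for values in product(sorted(domain), repeat=len(variables)))

# ForAll(X, even(X) >> not_greater(f(X), 1)) over the domain {1, 2}
f_map = {1: 2, 2: 1}
matrix = lambda env: env['X'] % 2 != 0 or f_map[env['X']] <= 1
print(forall_holds(['X'], matrix, {1, 2}))  # True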

Standardization

standardize(expr) renames variables so that each quantifier binds its own unique variable. This function serves mainly as a helper that will be called by all the functions involving conversion to normal forms (PNF, SNF, CNF/DNF).
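A hypothetical doctest, using the numbered_variable naming from above:

>>> standardize(ForAll(X, F(X)) & Exists(X, G(X)))
And(ForAll(_v0, F(_v0)), Exists(_v1, G(_v1)))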

Prenex Normal Form

is_pnf(expr) checks if the given formula is in Prenex Normal Form.
to_pnf(expr) converts the given formula into an equivalent formula in PNF. The aim of the conversion is to bring all quantifiers to the front of the expression by repeated application of the laws of equivalence; the formula is thus divided into two parts, the prefix (quantifiers) and the matrix (quantifier-free expression). Most of the algorithms associated with FoL require the formula to be in PNF.

>>> is_pnf(ForAll(X, Exists(Y, F(X) >> G(Y))))
True
>>> is_pnf(ForAll(X, F(X)) >> Exists(Y, G(Y)))
False
>>> to_pnf(ForAll(X, F(X)) >> Exists(X, G(X)))
Exists(X, Or(Not(F(X)), G(X)))

Skolem Normal Form

is_snf(expr) checks if the given formula is in Skolem Normal Form.
to_snf(expr) converts the given formula into an equisatisfiable formula in SNF. The aim of the conversion is to remove all existential quantifiers, replacing each existentially quantified variable with a skolem term (illustrated below):

  • The formula is first converted into PNF.
  • If an existential quantifier appears outside the scope of any universal quantifier, all of its variables are replaced by fresh skolem constants.
  • If an existential quantifier occurs within the scope of some universal quantifier(s), each of its variables is replaced by a skolem function of the universally quantified variables preceding it.
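Hypothetical doctests for the two skolemization cases (the _c/_f naming follows the numbered terms described earlier):

>>> to_snf(Exists(X, F(X)))
F(_c0)
>>> to_snf(ForAll(X, Exists(Y, R(X, Y))))
ForAll(X, R(X, _f0(X)))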

Conjunctive/Disjunctive Normal Form

Returns the Conjunctive/Disjunctive Normal Form of the given formula: the formula is converted to Skolem Normal Form, the universal quantifiers are dropped, and the distributive laws are applied repeatedly. This normal form is not strictly equivalent to the original formula, but is equisatisfiable.

>>> U, Z = variable('U Z')
>>> S = predicate('S', arity=3)
>>> expr = Exists(X, ForAll(Y, ForAll(Z, Exists(U, ~R(X, Z) & ~R(Y, Z) & S(X, Y, U)))))
>>> is_snf(expr)
False
>>> to_snf(expr)
ForAll({Y, Z}, And(Not(R(_c0, Z)), Not(R(Y, Z)), S(_c0, Y, _f0(Y, Z))))
# _c0 is a skolem constant and _f0 is a skolem function
>>> to_cnf(expr)
And(Not(R(_c0, Z)), Not(R(Y, Z)), S(_c0, Y, _f0(Y, Z)))

Unification

mgu(formula_set) returns the Most General Unifier (MGU) of a given set of formulas using the disagreement set algorithm, or False if unification is not possible. Analogous to the propositional bool_equal, this function finds the mapping that converts a given formula into another formula. At every step of resolution, the MGU is used to check whether two predicates can be unified.

Substitution:
substitute(expr, mapping) replaces the given variables with the specified terms in a formula. This is the FoL version of the propositional bool_map.

>>> a = constant('a')
>>> f, g, h = function('f g h')
>>> mgu(R(f(a), g(X)), R(Y, Y))
False
>>> formula_set = [S(a, X, h(g(Z))), S(Z, h(Y), h(Y))]
>>> mgu(formula_set)
{X: h(Y), Y: g(a), Z: a}
>>> substitute(formula_set[0], mgu(formula_set))
S(a, h(g(a)), h(g(a)))
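For intuition, here is a toy textbook unifier over tuple-encoded terms ('f', arg1, ...), with upper-case strings as variables. It is only a sketch: the occurs check, the disagreement-set formulation and SymPy integration are omitted:

def is_var(t):
    # variables are upper-case strings in this toy representation
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # chase a variable through the substitution built so far
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return dict(s, **{a: b})
    if is_var(b):
        return dict(s, **{b: a})
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a) == len(b)):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# R(f(a), g(X)) vs R(Y, Y): Y unifies with f(a), but then g(X) clashes with f(a)
print(unify(('R', ('f', 'a'), ('g', 'X')), ('R', 'Y', 'Y')))  # None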

Resolution

resolution(formula_set) uses resolution to determine whether the given set of formulas is satisfiable. It is similar to the resolution procedure used in propositional logic:

  • First, two clauses containing the same predicate with opposite signs are found.
  • Next, unification of the clashing predicates is attempted. If unification fails, the procedure is restarted with a different pair of predicates.
  • If the predicates can be unified, the two clauses are replaced by the union of both clauses, with the clashing pair of predicates removed (after applying the unifier).

>>> H, I = predicate('H I')
>>> formula_set = {~F(X) | G(X), ~F(X) | H(X), F(a), I(a), ~I(X) | ~H(X)}
>>> resolution(formula_set)
False

Entailment

entails(formula, formula_set=set()) returns whether a formula is entailed by (is a logical consequence of) a set of formulas. When the formula set is empty, it returns the validity of the given formula. Entailment is checked by negating the formula, adding it to the formula set and running resolution: the formula is entailed if and only if resolution returns False (unsatisfiable).

>>> formula_set = {ForAll(X, F(X) >> (G(X) & H(X))), Exists(X, F(X) & I(X))}
>>> formula = Exists(X, H(X) & I(X))
>>> entails(formula, formula_set)
True

Step by step display

Having a module which can demonstrate step-by-step results is of great importance in an academic environment (for demonstration purposes). Conversion to clausal forms, unification and resolution (wherever stepwise evaluation is possible) will take a stepwise option, set to False by default. The implementation idea is rather vague at the moment, but it will go something like this (with pretty printing):

>>> expr = Exists(y, g(y) & ForAll(z, r(z) >> f(y, z)))
>>> to_cnf(expr, stepwise=True)
Exists(y, g(y) & ForAll(z, ~r(z) | f(y, z)))
g(_c0) & ForAll(z, ~r(z) | f(_c0, z))
g(_c0) & (~r(z) | f(_c0, z))

Prototype

Here is the solution to a classic FoL problem modeled using the FoL module. The same problem could have been solved in one step using entails(f3, {f1, f2}).

Question: Some patients like all doctors. No patient likes any quack. Therefore no doctor is a quack.

>>> x, y = variable('x y')
>>> patient, doctor, quack = predicate('patient doctor quack')
>>> likes = predicate('likes', arity=2)
>>> f1 = Exists(x, patient(x) & ForAll(y, doctor(y) >> likes(x, y)))
>>> f2 = ForAll(x, patient(x) >> ForAll(y, quack(y) >> ~likes(x, y)))
>>> f3 = ForAll(x, doctor(x) >> ~quack(x))

# Using resolution
>>> clauses = set(to_cnf(f1 & f2 & ~f3).args)
>>> clauses
{patient(_c0), Or(Not(doctor(y)), likes(_c0, y)), Or(Not(patient(_v0)), Not(quack(_v1)), Not(likes(_v0, _v1))), doctor(_c1), quack(_c1)}
>>> resolution(clauses)
False
# False indicates that the set of clauses is unsatisfiable, hence the formula is entailed.

Semantic Tableaux

Find the satisfiability of a given formula using the Semantic Tableaux.

>>> semantic_tableaux(ForAll(X, F(X) & ~(G(X) >> F(X))))
False
>>> semantic_tableaux(ForAll(X, F(X)) >> Exists(X, G(X)))
True

Knowledge Base and Inference Engine

Create a Knowledge Base capable of adding facts and clauses and querying them efficiently. The final goal is to create an inference engine for FoL which supports Prolog-style reasoning.

>>> KB = FoL_KB()
>>> m, f, c1, c2 = constant('m f c1 c2')
>>> X, Y, Z = variable('X Y Z')
>>> parent, mother, father, sibling = predicate('parent mother father sibling', 2, KB=KB)
>>> mother(m, c1)
>>> father(f, c1)
>>> father(f, c2)
>>> (parent(Z, X) & parent(Z, Y)) >> sibling(X, Y)
>>> (father(X, Y) | mother(X, Y)) >> parent(X, Y)
>>> query(sibling(c1, c2))
True

To implicitly add facts and clauses to the KB, I plan to make the KB part of the predicate and modify __call__() accordingly. The current assumption system uses an inference engine for propositional logic with forward chaining and RETE. I will study the existing system and, if possible, modify and extend it to support FoL (RETE with quantifiers). I am yet to work out the exact implementation details for this functionality, but will definitely do so during the community bonding period.
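A rough sketch of the implicit-assertion idea (all names hypothetical; in particular, how query() avoids asserting the queried atom is one of the details still to be worked out):

class FolKB(object):
    def __init__(self):
        self.clauses = []
    def tell(self, formula):
        self.clauses.append(formula)

class Predicate(object):
    def __init__(self, name, arity=1, KB=None):
        self.name, self.arity, self.KB = name, arity, KB
    def __call__(self, *args):
        formula = (self.name,) + args  # stand-in for an AppliedPredicate
        if self.KB is not None:
            self.KB.tell(formula)      # implicit addition to the KB
        return formula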

Timeline

Community bonding Period:

As the name suggests, this period is for getting to know the community. However, having worked with SymPy for about 3 months now, I already feel quite at home with it. Hence I will utilize this time to discuss the points in my proposal with my mentor, fill in the gaps, and chalk out the exact implementation details of all the modules to be added. A few specific tasks I would additionally take up:

  • Extensively study the algorithms associated with the SAT solver.
  • Find out the parts in the propositional module (and core) that can be reused (with minor modification) for FoL.
  • List the modifications to be made to the core due to the addition of the FoL module, and find out whether they would break compatibility.
  • Study the inference engine present in the assumption system and work out the exact requirements and implementation details for the new inference engine.

I have divided the entire period into 5 parts of 2, 3, 3, 4 and 1 weeks respectively. The timeline is organized so that the tasks listed on one line typically take about a week, and I will send a pull request once all the tasks on a particular line are done. Thorough testing and documentation at every step is implied throughout.

WEEK 1-2:

Start the program off with extensions to the propositional logic module.

  • Remove automatic evaluation, NNF, Equivalence
  • Convert to CNF/DNF, Small functions

WEEK 3-5:

Implement the basic class hierarchy of FoL. This will probably take more time than anticipated because of the integration with the SymPy core and assumption system. Additionally, this has to be built on top of the propositional module, and all the existing functions will have to be extended to support FoL syntax and rules. The points in this part do not follow the weekly schedule exactly; PRs will be sent as soon as a minimum viable product is obtained.

  • Terms
  • Predicates
  • Quantifiers
  • Numbered Terms
  • Ground, Closed, Free, Bound
  • Interpretation

WEEK 6-8:

Once the base for FoL is complete, focus will be on implementing associated algorithms. All the functions below build towards deriving logical inferences.

  • Prenex Normal Form
  • Skolem Normal Form, CNF/DNF
  • Unification, Substitution
  • Resolution, Entailment

WEEK 9-12:

The last part is completely dedicated to the SAT solver. I plan to finish all the previous tasks as soon as possible to give this period the maximum amount of time (though even that might not be enough). The following is a priority-ordered list of the algorithms to be implemented; there will be a continual cycle of testing and benchmarking as each item is finished.

  • Clause Learning (1-UIP)
  • EVSIDS heuristics
  • All SAT
  • Search with restarts
  • Learned clause management
  • Minimal Unsatisfiable Subsets

WEEK 13:

The last week will focus on finishing leftover work in the form of testing, bug fixing and documentation, and getting the PRs merged into master.

Future Plans:

In the event that the tasks are finished ahead of schedule, I plan to work on more advanced algorithms for FoL. After the program is over, I plan to continue working with SymPy to create the thing everything has been building up to: the Inference Engine.

  • Semantic Tableaux
  • Knowledge Base, Inference Engine