November 17

* Resolution

    a v ~b, b  =>   a

    p1 v p2 v ... v pm,
    ~p1 v pn v ... v pq
    =>  p2 v ... v pm v pn v ... v pq
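The resolution rule above can be sketched in a few lines of Python. This is a minimal illustration, assuming a clause is a frozenset of literals and a literal is a string with a leading "~" for negation (that representation is my choice, not part of the notes):

```python
def negate(lit):
    """Return the complementary literal: a <-> ~a."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of clauses c1 and c2, one per
    complementary pair of literals."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            # Drop the complementary pair; keep all remaining literals.
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# {a, ~b} resolved with {b} yields {a}
print(resolve(frozenset({"a", "~b"}), frozenset({"b"})))
```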

* Proof by refutation using resolution

    To prove a statement using resolution:
       1) Convert each statement in the database to clausal form
          (clausal form is a disjunction of literals)
       2) Negate the goal statement and convert it to clausal form
       3) Repeatedly apply resolution to generate new statements
       4) If the empty clause [] is generated, the proof is complete

    Clausal Form is a disjunction of literals
      (no quantifiers, no &, no ->, no <->, no parentheses)

    Proof by refutation:
       We know the database statements are all true
       Resolution is sound
          All statements generated are true if the database is true
       How can we derive the empty (false) clause?
       The only possibly false statement is the negated goal
       If the negated goal is false, the goal must be true (A v ~A is a tautology)

       If Linus is sitting in the pumpkin patch, it must be Halloween.
       The Great Pumpkin appears on Halloween.
       Linus is sitting in the pumpkin patch.
       Prove the Great Pumpkin will appear today.

       1.  SitPatch(Linus) -> Halloween(Today)
       2.  Halloween(Today) -> GrPumpkin(Today)
       3.  SitPatch(Linus)
       1.  ~SitPatch(Linus) v Halloween(Today)
       2.  ~Halloween(Today) v GrPumpkin(Today)
       3.  SitPatch(Linus)
       4.  ~GrPumpkin(Today)     ; Negated goal
       5.  ~Halloween(Today)     {2&4}
       6.  ~SitPatch(Linus)      {1&5}
       7.  []                    {3&6}
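The whole refutation loop can be sketched as a small saturation procedure and run on the pumpkin clauses. This is a minimal sketch, assuming ground (variable-free) literals represented as strings with "~" for negation; it is not meant as an efficient prover:

```python
def negate(lit):
    """Return the complementary literal: a <-> ~a."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    return [(c1 - {l}) | (c2 - {negate(l)}) for l in c1 if negate(l) in c2]

def refute(clauses):
    """Saturate under resolution; True iff the empty clause [] appears."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is not b:
                    for r in resolve(a, b):
                        if not r:          # empty clause: contradiction
                            return True
                        new.add(r)
        if new <= clauses:                 # nothing new: no refutation
            return False
        clauses |= new

kb = [frozenset({"~SitPatch(Linus)", "Halloween(Today)"}),
      frozenset({"~Halloween(Today)", "GrPumpkin(Today)"}),
      frozenset({"SitPatch(Linus)"}),
      frozenset({"~GrPumpkin(Today)"})]   # negated goal
print(refute(kb))  # True
```

Dropping the negated goal from `kb` makes `refute` saturate without finding [], mirroring the argument that the negated goal is the only possible source of falsehood.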

       If the maid stole the jewelry, then the butler wasn't guilty.
       Either the maid stole the jewelry or she milked the cows.
       If the maid milked the cows, then the butler got his cream.
       Goal:  Therefore, if the butler was guilty, then he got his cream.

       G(M) -> ~G(B)
       G(M) v Cows(M)
       ~Cows(M) v Cream(B)
       ~(G(B) -> Cream(B))
       1. ~G(M) v ~G(B)
       2. G(M) v Cows(M)
       3. ~Cows(M) v Cream(B)
       4. G(B)
       5. ~Cream(B)
       6. ~G(M) {1&4}
       7. Cows(M) {2&6}
       8. Cream(B) {3&7}
       9. [] {5&8}

* Complexity Theory

   Until now we put all problems into two categories:  those that can be solved
   by algorithms and those that cannot.

   Now we realize that some are solvable in principle but not solvable in a
   practical sense by computers due to excessive time requirements.

   Show computing example.

   Suppose your task is to schedule visits of a traveling sales representative
   to 10 offices.  You are given a map with the 10 cities and asked to
   produce an itinerary that minimizes total distance traversed.

   If there are n cities to visit, the number of possible itineraries is
   certainly finite: (n-1)!.  An algorithm can examine all itineraries and
   compare them to find the shortest.  A TM can be designed to do this.

   There are too many tours to be examined in a reasonable amount of time.
   For n=10, we have to examine 9! = 362,880 itineraries.
   For n=40, we have to examine over 10^45 itineraries.  Even if we examined
      10^15 tours per second (far more than today's computers are capable of),
      the required time would be several billion lifetimes of the universe!
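The arithmetic behind these claims is easy to check (the 10^15 tours-per-second rate is the figure assumed in the notes):

```python
import math

def itineraries(n):
    """Number of distinct tours from a fixed starting city: (n-1)!."""
    return math.factorial(n - 1)

print(itineraries(10))                 # 362880 tours for n=10

# Time to examine all n=40 tours at 10^15 tours per second:
seconds = itineraries(40) / 1e15
years = seconds / (365 * 24 * 3600)
print(f"{years:.1e} years")            # astronomically many years
```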

   A "practical feasible algorithm" is one that requires only a number of steps
   that is a "polynomial" in the size of the input.
   Which type of TM do we use?  The choice does not matter.
   If one variant can solve a problem in polynomial time, all variants can
      (the conversions between variants take only polynomial time).

   A TM M is "polynomially bounded" if for every input x, M always halts after
   at most p(n) steps, where n is the size of the input and p is a polynomial.

   A language is "polynomially decidable" if there is a polynomially bounded TM
   that decides it.  The class of all polynomially decidable languages is "P".
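As a concrete instance (the choice of language is mine, for illustration): the language of palindromes is polynomially decidable, since a decider needs only time linear in the input length, so it is in P.

```python
def decides_palindromes(w):
    """A linear-time decider for the language of palindromes,
    witnessing that this language is in P."""
    return w == w[::-1]

print(decides_palindromes("racecar"))  # True
print(decides_palindromes("abca"))     # False
```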