First-Order Logic

Propositional logic can only represent facts about the world.
First-order logic describes a world which consists of objects and properties (or predicates) of those objects.
Among objects, various relations hold, e.g., Parent(Martin, Zac).
A function is a relation in which there is only one value for a given input.

Examples
Objects: people, houses, numbers, planets,...
Relations: parent, brother-of, greater-than,...
Properties: red, round, prime,...
Functions: father-of, one-more-than

Examples:
"One plus one equals two."
"Squares neighbouring the Wumpus are smelly."
First-order logic is universal in the sense that it can express anything that can be programmed.

Syntax and semantics

First-order logic has sentences, just like propositional logic, and in addition it has terms, which represent objects. Constant symbols, variables, and function symbols are used to build terms; quantifiers and predicate symbols are used to build sentences.

<Sentence> := <AtomicSentence>
            | <Sentence> <Connective> <Sentence>
            | <Quantifier> <Variable>,... <Sentence>
            | ¬<Sentence>
            | (<Sentence>)

<AtomicSentence> := <Predicate>(<Term>,...)
            | <Term> = <Term>

<Term> := <Function>(<Term>,...)
            | <Constant>
            | <Variable>

<Connective> := ∧ | ∨ | <=> | =>
<Quantifier> := ∀ | ∃
<Constant> := Martin | 59302 | Cat | X | ...
<Variable> := a | x | s | ...
<Predicate> := Before | Likes | Raining | Fails | ...
<Function> := Father | Hairof | 304gradefor | ...

BNF for first-order logic
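As a concrete illustration of the grammar (not part of the definition), terms and sentences can be represented by a small set of data types. The following Python sketch is one possible rendering; the class names and the example sentence are invented for illustration.

# A minimal sketch of the BNF above as Python data types (names are illustrative).
from dataclasses import dataclass
from typing import Union

@dataclass
class Constant:          # e.g. Martin, 59302
    name: str

@dataclass
class Variable:          # e.g. a, x, s
    name: str

@dataclass
class FunctionTerm:      # <Function>(<Term>,...), e.g. Father(Martin)
    function: str
    args: tuple

Term = Union[Constant, Variable, FunctionTerm]

@dataclass
class Atom:              # <Predicate>(<Term>,...), e.g. Likes(x, Martin)
    predicate: str
    args: tuple

@dataclass
class Connective:        # <Sentence> <Connective> <Sentence>
    op: str              # "and", "or", "=>", "<=>"
    left: object
    right: object

@dataclass
class Quantified:        # <Quantifier> <Variable> <Sentence>
    quantifier: str      # "forall" or "exists"
    variable: Variable
    body: object

# forall x Likes(x, Father(Martin))
example = Quantified("forall", Variable("x"),
                     Atom("Likes", (Variable("x"),
                                    FunctionTerm("Father", (Constant("Martin"),)))))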

Constant symbols: A, B, C, 1, John,...
Each constant symbol names exactly one object in the world; not all objects need to have names, and some objects can have more than one name.

Predicate symbols: Round, Brother,...
A predicate symbol refers to a particular relation in the model. For example, since Brother is a binary predicate symbol, the relation it refers to must also be binary, i.e., it must hold or fail to hold between pairs of objects.

Function symbols: Cosine, Fatherof, Leftlegof
A functional relation relates an object to exactly one other object. The last element in the tuple is the value of the function for the other elements, e.g., Officeof(Martin, sc1.40).

Terms
A term is a logical expression that refers to an object. Constant symbols are terms. Terms can also be constructed from function symbols and constant symbols, e.g., Fatherof(John).

The formal semantics of terms is as follows: an interpretation specifies the functional relation referred to by each function symbol and the objects referred to by the constant symbols. Hence, a function term with n arguments refers to the (n+1)th element of the tuple whose first n elements are the objects referred to by those arguments.
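To make this concrete, here is a toy interpretation in Python: constants map to objects, each function symbol maps to a functional relation (here an ordinary dictionary), and a ground term denotes the object obtained by looking up its arguments. The domain, the mappings, and the helper name are assumptions made up for this sketch.

# A toy interpretation (objects and mappings are illustrative assumptions).
domain = {"john", "bill", "mary"}                    # the objects in the world

constants = {"John": "john", "Bill": "bill", "Mary": "mary"}

# The functional relation for Fatherof: each input tuple maps to exactly one value.
functions = {"Fatherof": {("john",): "bill"}}

def denote(term):
    """Return the object a ground term refers to under this interpretation."""
    if isinstance(term, str):                        # a constant symbol
        return constants[term]
    symbol, args = term                              # e.g. ("Fatherof", ("John",))
    arg_objects = tuple(denote(a) for a in args)     # the first n elements of the tuple
    return functions[symbol][arg_objects]            # the (n+1)th element is the value

print(denote(("Fatherof", ("John",))))               # -> "bill"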

Atomic sentences
An atomic sentence is formed from a predicate symbol followed by a parenthesized list of terms; for example, Brother(Richard,John) states that the object referred to by Richard is the brother of the object referred to by John.
Atomic sentences can also have arguments that are complex terms, e.g., Married(Fatherof(Richard), Motherof(John)).

Complex sentences
We can use logical connectives to construct more complex sentences. The semantics of these is the same as in propositional logic.

Examples
Brother(Richard,John) ∧ Brother(John,Richard) is true just in case John is the brother of Richard and Richard is the brother of John.
Older(John,30) ∨ Younger(John,30) is true when John is older than 30 or he is younger than 30.

Quantifiers
We now have a logic that allows objects; quantifiers let us express properties of entire collections of objects.
There are two quantifiers in first-order logic: universal and existential.

Universal quantification (∀)
Using this we can say things like, "All cats are mammals":
∀x Cat(x) => Mammal(x)

Notice that => (rather than ∧) is the appropriate connective to use with ∀. Universal statements are true if they are true for every individual in the world; they can be thought of as an infinite conjunction.

Existential quantification (∃)
Where universal quantification makes statements about every object, existential quantification makes statements about some objects. To say, for example, that Spot has a sister that is a cat, we write
∃x Sister(x,Spot) ∧ Cat(x)

∃x P is true if P is true for some object in the world. It can be thought of as an infinite disjunction.
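Over a finite domain, the "infinite conjunction/disjunction" reading can be checked directly: ∀ corresponds to all() over the domain and ∃ to any(). The domain and predicates below are invented for illustration.

# Evaluating quantifiers over a small finite domain (illustrative objects only).
domain = ["spot", "felix", "rex"]

def is_cat(x):
    return x in {"spot", "felix"}

def is_mammal(x):
    return True

# "All cats are mammals":  forall x Cat(x) => Mammal(x)
print(all((not is_cat(x)) or is_mammal(x) for x in domain))    # True

# "Some object is a cat":  exists x Cat(x)
print(any(is_cat(x) for x in domain))                          # True

# The duality  forall x not P  <=>  not exists x P
print(all(not is_cat(x) for x in domain) == (not any(is_cat(x) for x in domain)))   # True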

Nested quantifiers
Very complex statements can be made if one nests quantifiers. Starting simply, without mixing types of quantifiers, we can say things like
∀x,y Brother(x,y) => Sibling(x,y)

We can also mix quantifiers, as in "Everybody is good at something":
∀x ∃y GoodAt(x,y)

Connections between ∀ and ∃
There is an intimate connection between the two quantifiers. To see this, consider the sentence
∀x ¬Likes(x, DeceitfulPoliticians)

"For all x, x does not like Deceitful Politicians."
Another way to say this is, "There does not exist an x that likes Deceitful Politicians," i.e., ¬∃x Likes(x, DeceitfulPoliticians). This holds in general because ∀ is a conjunction over all objects and ∃ is a disjunction over all objects. In fact, all of the following are also true:
∀x ¬P <=> ¬∃x P          ¬P ∧ ¬Q <=> ¬(P ∨ Q)
¬∀x P <=> ∃x ¬P          ¬(P ∧ Q) <=> ¬P ∨ ¬Q
∀x P <=> ¬∃x ¬P          P ∧ Q <=> ¬(¬P ∨ ¬Q)
∃x P <=> ¬∀x ¬P          P ∨ Q <=> ¬(¬P ∧ ¬Q)
Equality
Often the equality symbol is included as a special symbol. This is because the notion of equality is so important in our thinking. With this symbol, we can write things like Father(John)=Bill, whose intended meaning is that the object that is the father of John is the same object as Bill.
Equality can also be thought of as an ordinary binary relation symbol, so the interpretation of = is a set of pairs.
Equality can be used to say that there are two or more individuals with a particular property: "There is an x and a y that are sisters of Spot, and they are not the same individual," i.e., ∃x,y Sister(x,Spot) ∧ Sister(y,Spot) ∧ ¬(x=y).
The equality symbol can also be used to restrict the number of objects that have a certain property, for example, "Every pair of objects with property P are equal," i.e., ∀x,y P(x) ∧ P(y) => x=y. This statement restricts there to being at most one object with property P.
A shorthand that is often used is ∃!x King(x), which means
∃x King(x) ∧ ∀y King(y) => x=y
Using First-Order Logic

In knowledge representation, a domain is a section of the world about which we wish to express some knowledge.
Here we give some simple examples of using FOL to encode domains.

The Kinship domain

Examples
∀m,c Mother(c)=m <=> Female(m) ∧ Parent(m,c)
∀w,h Husband(h,w) <=> Male(h) ∧ Spouse(h,w)
∀x Male(x) <=> ¬Female(x)
∀g,c Grandparent(g,c) <=> ∃p Parent(g,p) ∧ Parent(p,c)
∀x,y Sibling(x,y) <=> x≠y ∧ ∃p Parent(p,x) ∧ Parent(p,y)
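As a sanity check on axioms like these, the defined relations can be computed from a handful of Parent facts. The sketch below derives Grandparent and Sibling exactly as the last two axioms specify; the family members are illustrative assumptions.

# Deriving Grandparent and Sibling from Parent facts, following the axioms above.
parent = {("maxi", "spot"), ("spot", "boots"), ("spot", "rex")}   # illustrative facts
people = {x for pair in parent for x in pair}

# Grandparent(g,c) <=> exists p. Parent(g,p) and Parent(p,c)
grandparent = {(g, c) for g in people for c in people
               if any((g, p) in parent and (p, c) in parent for p in people)}

# Sibling(x,y) <=> x != y and exists p. Parent(p,x) and Parent(p,y)
sibling = {(x, y) for x in people for y in people
           if x != y and any((p, x) in parent and (p, y) in parent for p in people)}

print(grandparent)   # maxi is a grandparent of boots and of rex
print(sibling)       # boots and rex are siblings (in both orders)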

Axioms, Definitions and Theorems
Axioms capture the basic facts about a domain, e.g., the examples above are axioms. The axioms are then used to prove theorems.
We can have too few or too many axioms. If we have too few, then we cannot prove theorems that we expect should actually follow in a domain. If we have too many axioms, then some axioms follow from (combinations of) others. An axiom is said to be independent of a set of axioms if it does not follow from that set.
None of the primitive predicates (or functions) is defined; instead, we give partial specifications of their properties.

Asking questions and getting answers

To add sentences to the knowledge base, we call TELL, e.g.,
   TELL(KB, ∀m,c Mother(c)=m <=> Female(m) ∧ Parent(m,c))
This goes both for axioms (as above) and specific facts about a particular situation such as
   TELL(KB, Female(Maxi) ∧ Parent(Maxi,Spot) ∧ Parent(Spot,Boots))
With these facts added, we can
   ASK(KB,Grandparent(Maxi,Boots))
and receive a yes/no answer. We can also ask questions that return additional information in the answers such as
   ASK(KB, ∃x Child(x,Spot)).
Here we do not want simply a yes/no answer. In addition, we would like to know a term for x denoting an object in the domain. In general, for a query involving existentially quantified variables, we want to know bindings for those variables. Hence, ASK returns a binding list, e.g., {x/Boots}.
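The following Python sketch shows one way TELL and ASK could behave for ground facts: TELL stores atomic sentences and ASK returns a list of binding lists. It is only an illustration using invented names; in particular, it performs no inference, so it cannot derive Grandparent(Maxi,Boots) from the kinship axioms the way a real knowledge base would.

# A minimal TELL/ASK sketch over ground facts (not a full FOL inference procedure).
class Variable:
    """A query variable, such as x in ASK(KB, Child(x, Spot))."""
    def __init__(self, name):
        self.name = name

class KB:
    def __init__(self):
        self.facts = set()               # ground atoms, e.g. ("Parent", "Maxi", "Spot")

    def tell(self, predicate, *args):
        self.facts.add((predicate, *args))

    def ask(self, predicate, *args):
        """Return a list of binding lists {variable name: constant} for matching facts."""
        answers = []
        for fact in self.facts:
            if fact[0] != predicate or len(fact) - 1 != len(args):
                continue
            bindings = {}
            for query_arg, fact_arg in zip(args, fact[1:]):
                if isinstance(query_arg, Variable):
                    # Bind the variable, failing on a conflicting earlier binding.
                    if bindings.get(query_arg.name, fact_arg) != fact_arg:
                        break
                    bindings[query_arg.name] = fact_arg
                elif query_arg != fact_arg:
                    break
            else:
                answers.append(bindings)
        return answers

kb = KB()
kb.tell("Parent", "Maxi", "Spot")
kb.tell("Parent", "Spot", "Boots")
kb.tell("Female", "Maxi")

x = Variable("x")
print(kb.ask("Parent", "Spot", x))         # [{'x': 'Boots'}], i.e. the binding list {x/Boots}
print(kb.ask("Parent", "Maxi", "Spot"))    # [{}], a "yes" with no variables to bind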

Logical Agents for the Wumpus World

We will consider three agent architectures for the Wumpus World: reflex agents that merely classify their percepts and act on that classification, model-based agents that construct an internal representation of the world and use it to act, and goal-based agents that form goals and try to achieve them.

The first step is to define the interface between the agent and the world. The percept sequence must contain both the percepts and the time at which they occurred; otherwise the agent will not be able to tell when things were observed. We will use integers for time steps, so a typical percept sentence would be
Percept([Stench, Breeze, Glitter, None, None], 5)

The agent's action must be one of:
Turn(Right), Turn(Left), Forward, Shoot, Grab, Release, Climb
To determine which one is best, we create a query such as ∃a Action(a,5). If we set things up right, such a query will return a binding list such as {a/Grab}.

A Simple Reflex Agent

The simplest type of rule connects actions directly with percepts; such rules resemble reflexes. For example, if the agent sees a glitter, it should perform a Grab:
∀s,b,u,c,t Percept([s,b,Glitter,u,c],t) => Action(Grab,t)
The connection between percepts and actions can also be mediated by rules for perception, which turn the immediate perceptual input into more useful forms, e.g.,
∀s,b,u,c,t Percept([s,b,Glitter,u,c],t) => AtGold(t)
Then a connection can be made from these predicates to action choices with separate rules, such as
∀t AtGold(t) => Action(Grab,t)
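A direct percept-to-action rule like the Grab rule above can be read as a simple test on the current percept. The sketch below is an illustrative Python rendering, not the agent program itself; the percept encoding is an assumption.

# A minimal reflex rule: grab when glitter is perceived (illustrative only).
def reflex_action(percept):
    """percept is a 5-element list [stench, breeze, glitter, bump, scream]."""
    stench, breeze, glitter, bump, scream = percept
    if glitter == "Glitter":
        return "Grab"
    return "Forward"     # an arbitrary default; a real agent needs many more rules

print(reflex_action(["Stench", "Breeze", "Glitter", None, None]))   # -> Grab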
Limitations of simple reflex agents
Simple reflex agents can react only to the current percept. Therefore, they cannot know what was discovered in previous steps, and they are doomed to a life of wandering aimlessly.

Representing Change in the World

We would like to have rules that refer to previous times. The best way to do this is to maintain an internal model of the world.
The most straightforward way to deal with change is simply to change the knowledge base, erasing and adding sentences as the agent explores its environment. This approach solves some of the problems we saw with the propositional agent. However, it does not allow the agent to keep track of the past.

Situation Calculus
Situation Calculus is the name for a particular way of describing change in FOL. It conceives of the world as a sequence of situations, each of which is a snapshot of the state of the world. Situations are generated from previous situations by actions.
Every relation whose truth can change over time is handled by giving an extra situation argument to the corresponding predicate symbol. By convention, we make the situation argument always the last. Hence, instead of At(Agent,Location), we would have At(Agent,[1,1],S0) ∧ At(Agent,[1,2],S1).
Relations or properties that do not change over time need not have the extra argument, e.g., WallAt([0,1]).
The next step is to represent how the world changes from one situation to the next. One way to do this is to have a function Result that maps actions and states to new states. Then we would have, for example, Result(Forward, S0) = S1 and Result(Turn(Right), S1) = S2.


Actions are described by stating their effects: we specify the properties of the situation that results from doing the action. For example,
∀x,s Present(x,s) ∧ Portable(x) => Holding(x, Result(Grab,s))
or
∀x,s ¬Holding(x, Result(Release,s))
These axioms are called effect axioms. Alone they are not sufficient to keep track of whether the agent is holding the gold. We also need to say that if the agent is holding something in a state, then it is still holding it in the state that results from any action other than Release:
∀a,x,s Holding(x,s) ∧ (a ≠ Release) => Holding(x, Result(a,s))
Axioms like these are called frame axioms. The combination of effect and frame axioms provides a complete description of how the world changes in response to the agent's actions.

A more elegant representation combines the effects and frame axioms into a single axiom that describes how to compute the next time step. These axioms have the structure
P true afterwards <=> [an action made P true ∨ P true already and no action made P false]

The use of <=> ensures that a predicate will be true if it is made true or stays true and that it will be false in all other cases.

∀a,x,s Holding(x, Result(a,s)) <=> [(a=Grab ∧ Present(x,s) ∧ Portable(x)) ∨ (Holding(x,s) ∧ a≠Release)]

This type of axiom is called a successor-state axiom. One such axiom is needed for each predicate that may change its value over time. The axiom must list all the ways in which a predicate can become true and all the ways in which it can become false.
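The Holding successor-state axiom can also be read as a recipe for computing the fluent in the next situation from the current one. Below is an illustrative Python version of exactly that axiom; the dictionary-based state representation is an assumption.

# Holding(x, Result(a,s)) <=> (a = Grab and Present(x,s) and Portable(x))
#                             or (Holding(x,s) and a != Release)
def holding_next(x, action, state):
    """Compute Holding(x) in the situation resulting from doing `action` in `state`.
    `state` is a dict with sets 'holding', 'present' and 'portable' (an assumed encoding)."""
    made_true = action == "Grab" and x in state["present"] and x in state["portable"]
    stays_true = x in state["holding"] and action != "Release"
    return made_true or stays_true

s0 = {"holding": set(), "present": {"gold"}, "portable": {"gold"}}
print(holding_next("gold", "Grab", s0))       # True: grabbing picks up the gold
print(holding_next("gold", "Forward", s0))    # False: it was not already held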

Keeping track of location

In the Wumpus world, the agent cannot directly perceive its location, so it needs to keep track of where it has been. It's also a good idea for it to keep track of what it has seen in previous locations. The agent needs to know its location and orientation:
At(Agent, [1,1], S0) ∧ Orientation(Agent, S0) = 0

The agent also needs to know how the locations are arranged. Its "map" consists of the values of the function LocationToward, which takes a location and a direction and gives the location that is one step forward in the given direction. From the map it is possible to tell which square is directly ahead of the agent. The agent also needs to know whatever is known about the contents of the locations; assume that the perimeter locations contain walls and the rest do not. The agent needs to know what the actions do to locations and, finally, what the actions do to orientations.
In addition to keeping track of location and the gold, the agent should also keep track of whether the Wumpus is alive or dead. We also need to describe the Shoot action.
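The map and movement knowledge described above amounts to simple arithmetic on coordinates and orientations. The sketch below shows one way to compute it; the 0/90/180/270-degree encoding with 0 = east is a conventional assumption, not something fixed by the text.

# LocationToward and orientation updates (encoding conventions are assumptions).
def location_toward(location, orientation):
    """The square one step from `location` in direction `orientation` (degrees, 0 = east)."""
    x, y = location
    dx, dy = {0: (1, 0), 90: (0, 1), 180: (-1, 0), 270: (0, -1)}[orientation]
    return (x + dx, y + dy)

def turn(orientation, action):
    """New orientation after Turn(Left)/Turn(Right); any other action leaves it unchanged."""
    if action == "Turn(Left)":
        return (orientation + 90) % 360
    if action == "Turn(Right)":
        return (orientation - 90) % 360
    return orientation

print(location_toward((1, 1), 0))      # (2, 1): the square ahead when facing east
print(turn(0, "Turn(Left)"))           # 90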

Deducing Hidden Properties of the World

From the perceptual information we obtain in situations, we can infer properties of locations. Neither Breezy nor Smelly needs a situation argument because pits and Wumpuses do not move around.

We need to write some rules that relate various aspects of single world states (as opposed to across states). There are two main kinds of such rules:
Causal rules reflect the assumed direction of causality in the world, e.g., squares adjacent to a Wumpus are smelly: ∀l1,l2,s At(Wumpus,l1,s) ∧ Adjacent(l1,l2) => Smelly(l2).
Diagnostic rules infer the presence of hidden properties from percept-derived information, e.g., a breeze perceived at a location means that location is breezy: ∀l,s At(Agent,l,s) ∧ Breeze(s) => Breezy(l).

Why do we need both causal and diagnostic rules? It would seem that diagnostic rules are enough. However, it is very tricky to ensure that they derive the strongest possible conclusions from the available information. For example, the absence of stench or breeze implies that adjacent squares are OK, but sometimes a square can be OK even when smells and breezes abound. The model-based rule is probably the best way to represent safety.

The important thing to remember about all this is that if the axioms correctly and completely describe the way the world works and the way percepts are produced, the inference procedure will correctly infer the strongest possible description of the world state given the available percepts.

Preference Among Actions

A problem with the Wumpus world knowledge base that we have built so far is that it is difficult to decide which action is best among a number of possibilities. For example, to decide between a forward and a grab, axioms describing when it is OK to move to a square would have to mention glitter. This is not modular. We can solve this problem by separating facts about actions from facts about goals. This way our agent can be reprogrammed just by asking it to achieve different goals.

The first step is to describe the desirability of actions independent of each other. In doing this we will use a simple scale: actions can be Great, Good, Medium, Risky, or Deadly. Obviously, the agent should always do the best action it can find:
∀a,s Great(a,s) => Action(a,s)
∀a,s Good(a,s) ∧ (¬∃b Great(b,s)) => Action(a,s)
and similarly for Medium and Risky actions.
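Operationally, these preference axioms say: take a Great action if one is available, otherwise a Good one, and so on, never choosing a Deadly action. A minimal sketch (the dictionary representation of candidate actions is an assumption):

# Choosing the best available action by quality class (illustrative sketch).
QUALITY_ORDER = ["Great", "Good", "Medium", "Risky"]     # Deadly actions are never chosen

def best_action(candidates):
    """`candidates` maps each action to its quality, e.g. {"Grab": "Great", "Forward": "Medium"}."""
    for quality in QUALITY_ORDER:
        for action, q in candidates.items():
            if q == quality:
                return action
    return None                                          # no acceptable action

print(best_action({"Forward": "Medium", "Grab": "Great"}))   # -> Grab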

We use this action quality scale in the following way. Up to the point where it finds the gold, the agent's basic strategy is defined in terms of this scale. Once the gold is found, the policy must change: at this point the agent should have the goal of safely reaching square [1,1] as quickly as possible. The presence of an explicit goal allows the agent to work out a sequence of actions that will achieve the goal. There are at least three ways to find such a sequence: