Variable Forgetting in Reasoning about Knowledge

In this paper, we investigate knowledge reasoning within a simple framework called knowledge structure. We use variable forgetting as a basic operation for one agent to reason about its own or other agents' knowledge. In our framework, two notions, namely agents' observable variables and the weakest sufficient condition, play important roles in knowledge reasoning. Given a background knowledge base and a set of observable variables for each agent, we show that the notion of an agent knowing a formula can be defined as a weakest sufficient condition of the formula under the background knowledge base. Moreover, we show how to capture the notion of common knowledge by using a generalized notion of weakest sufficient condition. Also, we show that the public announcement operator can be conveniently dealt with via our notion of knowledge structure. Further, we explore the computational complexity of the problem of whether an epistemic formula is realized in a knowledge structure. In the general case, this problem is PSPACE-hard; however, for some interesting subcases, it can be reduced to co-NP. Finally, we discuss possible applications of our framework in some interesting domains such as the automated analysis of the well-known muddy children puzzle and the verification of the revised Needham-Schroeder protocol. We believe that there are many scenarios where the natural presentation of the available information about knowledge is in the form of a knowledge structure. What makes it valuable compared with the corresponding multi-agent S5 Kripke structure is that it can be much more succinct.


Introduction
Epistemic logics, or logics of knowledge, are usually recognized as having originated in the work of Jaakko Hintikka, a philosopher who showed in the early 1960s how certain modal logics could be used to capture intuitions about the nature of knowledge (Hintikka, 1962). In the mid-1980s, Halpern and his colleagues discovered that S5 epistemic logics could be given a natural interpretation in terms of the states of processes (commonly called agents) in a distributed system. This model is now known as the interpreted system model (Fagin, Halpern, Moses, & Vardi, 1995). It was found that this model plays an important role in the theory of distributed systems, and it has been applied successfully in reasoning about communication protocols (Halpern & Zuck, 1992). However, the work on epistemic logic has mainly focused on theoretical issues such as variants of modal logic, completeness, computational complexity, and derived notions like distributed knowledge and common knowledge.
In this paper, we explore knowledge reasoning within a more concrete model of knowledge. Our framework of reasoning about knowledge is simple, yet powerful enough to analyze realistic protocols such as some widely used security protocols.
To illustrate the problem investigated in this paper, let us consider the communication scenario in which Alice sends Bob a message and Bob sends Alice an acknowledgement upon receiving the message. We assume Alice and Bob commonly have the following background knowledge base Γ_CS:

Bob_recv_msg ⇒ Alice_send_msg
Bob_send_ack ⇒ Bob_recv_msg
Alice_recv_ack ⇒ Bob_send_ack

where Bob_recv_msg and Bob_send_ack are observable variables to Bob, while Alice_send_msg and Alice_recv_ack are observable to Alice.
The problem we are concerned with is how to verify that Alice or Bob knows a statement ϕ. Intuitively, we should be able to prove that, for a statement observable to Alice (Bob), Alice (Bob) knows the statement if and only if the statement itself holds. As for the knowledge of non-observable statements, the following should hold:
1. Alice knows Bob_recv_msg if Alice_recv_ack holds; on the other hand, if Alice knows Bob_recv_msg, then Alice_recv_ack holds, which means that, in the context of this example, the only way that Alice gets to know Bob_recv_msg is that Alice receives the acknowledgement from Bob.
2. Bob knows Alice_send_msg if Bob_recv_msg holds; moreover, if Bob knows Alice_send_msg, then Bob_recv_msg holds. The latter indicates that the only way that Bob gets to know Alice_send_msg is that Bob receives the message from Alice.
3. Finally, Bob does not know Alice_recv_ack.
The idea behind the knowledge model for the scenarios demonstrated above is that an agent's knowledge is just the agent's observations, or logical consequences of the agent's observations under the background knowledge base. One of the key notions introduced in this paper is that of agents' observable variables. This notion shares a similar spirit with the local variables in the work of van der Hoek and Wooldridge (2002) and the local propositions in the work of Engelhardt, van der Meyden and Moses (1998) and of Engelhardt, van der Meyden and Su (2003). Informally speaking, local propositions are those depending only upon an agent's local information; an agent can always determine whether a given local proposition is true. Local variables are those primitive propositions that are local. Nevertheless, the notion of local propositions (Engelhardt et al., 1998, 2003) is a semantic property of the truth assignment function in a Kripke structure, while the notion of local variables (van der Hoek & Wooldridge, 2002) is a property of syntactic variables. In this paper, we prefer the term "observable variable" in order to avoid any confusion with the term "local variable" used in programming, where "non-local variables" such as "global variables" may often be observable.
Our knowledge model is also closely related to the notion of weakest sufficient condition, which was first formalized by Lin (2001). Given a background knowledge base Γ and a set of observable variables O_i for each agent i, we show that the notion of agent i knowing a formula ϕ can be defined as the weakest sufficient condition of ϕ over O_i under Γ, which can be computed via the operation of variable forgetting (Lin & Reiter, 1994), or elimination of middle terms (Boole, 1854). Moreover, we generalize the notion of weakest sufficient condition and capture the notion of common knowledge.

Now we briefly discuss the role of variable forgetting in our knowledge model. Let us examine the scenario described above again, and consider the question: how can Alice figure out Bob's knowledge when she receives the acknowledgement from Bob? Note that Alice's knowledge is the conjunction of the background knowledge base Γ_CS and her observations (Alice_recv_ack etc.). Moreover, all Alice knows about Bob's knowledge is the conjunction of the background knowledge base Γ_CS and all she knows about Bob's observations. Thus, Alice gets Bob's knowledge by computing all she knows about Bob's observations. In our setting, Alice obtains her knowledge of Bob's observations simply by forgetting Bob's non-observable variables in her own knowledge.
There is a recent trend of extending epistemic logics with dynamic operators so that the evolution of knowledge can be expressed (van Benthem, 2001; van Ditmarsch, van der Hoek, & Kooi, 2005a). The most basic extension is public announcement logic (PAL), which is obtained by adding an operator for truthful public announcements (Plaza, 1989; Baltag, Moss, & Solecki, 1998; van Ditmarsch, van der Hoek, & Kooi, 2005b). We show that the public announcement operator can be conveniently dealt with via our notion of knowledge structure. This makes the notion of knowledge structure genuinely useful for applications like the automated analysis of the well-known muddy children puzzle.
From the discussion above, we can see that our framework of reasoning about knowledge is appropriate in those situations where every agent has a specified set of observable variables. To further show the significance of our framework, we investigate some of its interesting applications to the automated analysis of the well-known muddy children puzzle and the verification of the revised Needham-Schroeder protocol (Lowe, 1996).
We believe that there are many scenarios where the natural presentation of the available information about knowledge is in the form of a knowledge structure. What makes it valuable compared with the corresponding multi-agent S5 Kripke structure is that it can be much more succinct. Of course, the price to pay is that determining whether a formula holds in a knowledge structure is PSPACE-hard in the general case, while it is in PTIME when the corresponding S5 Kripke structure is taken as input. However, the achieved trade-off between time and space can prove computationally valuable. In particular, the validity problem for a knowledge structure can be addressed for some instances for which generating the corresponding Kripke structure would be unfeasible. The muddy children puzzle shows this point clearly: generating the corresponding Kripke structure is impossible from a practical point of view, even for the least number of children considered in the experiments.
The organization of this paper is as follows. In the next section, we briefly introduce the concept of forgetting and the notions of weakest sufficient and strongest necessary conditions. In Section 3, we define our framework of reasoning about knowledge via variable forgetting. In Section 4, we generalize the notions of weakest sufficient condition and strongest necessary condition to capture common knowledge within our framework. In Section 5, we show that the public announcement operator can also be conveniently dealt with via our notion of knowledge structure. Section 6 discusses the computational complexity of the problem of whether an epistemic formula is realized in a knowledge structure. In the general case, this problem is PSPACE-hard; however, for some interesting subcases, it can be reduced to co-NP. In Section 7, we consider a case study by applying our framework to model the well-known muddy children puzzle, and furthermore to security protocol verification in Section 8. Finally, we discuss some related work and conclude the paper with some remarks.

Preliminaries
In this section, we provide some preliminaries about the notions of variable forgetting and weakest sufficient conditions, and about epistemic logic.

Forgetting
Given a set of propositional variables P, we identify a truth assignment over P with a subset of P. We say a formula ϕ is a formula over P if each propositional variable occurring in ϕ is in P. For convenience, we define true as an abbreviation for a fixed valid propositional formula, say p ∨ ¬p, where p is a primitive proposition in P. We abbreviate ¬true by false.
We also use |= to denote the usual satisfaction relation between a truth assignment and a formula.Moreover, for a set of formulas Γ and a formula ϕ, we use Γ |= ϕ to denote that for every assignment σ, if σ |= α for all α ∈ Γ, then σ |= ϕ.
Given a propositional formula ϕ and a propositional variable p, we denote by ϕ(p/true) the result of replacing every occurrence of p in ϕ by true; we define ϕ(p/false) similarly. The notion of variable forgetting (Lin & Reiter, 1994), or elimination of middle terms (Boole, 1854), can be defined as follows:

Definition 1 Let ϕ be a formula over P, and V ⊆ P. The forgetting of V in ϕ, denoted as ∃V ϕ, is a quantified formula over P, defined inductively as follows:
• ∃∅ ϕ = ϕ;
• ∃{p} ϕ = ϕ(p/true) ∨ ϕ(p/false);
• ∃({p} ∪ V) ϕ = ∃V (∃{p} ϕ).

For convenience, we use ∀V ϕ to denote ¬∃V (¬ϕ).
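Definition 1 is straightforward to realize when formulas are represented semantically as Boolean predicates over truth assignments. The sketch below (all helper names are ours, not the paper's) forgets one variable at a time via the clause ∃{p}ϕ = ϕ(p/true) ∨ ϕ(p/false), and checks by truth-table enumeration that ∃{q}(p ∧ q) is logically equivalent to p.

```python
from itertools import product

# A formula is represented semantically as a predicate over an assignment,
# i.e. a dict mapping variable names to booleans.
def forget(phi, V):
    """Forgetting of the variables in V in phi:
    exists{p}.phi = phi[p/true] or phi[p/false], applied variable by variable."""
    for p in V:
        phi = (lambda f, q: lambda s: f({**s, q: True}) or f({**s, q: False}))(phi, p)
    return phi

def equivalent(f, g, variables):
    """Logical equivalence, checked by truth-table enumeration."""
    return all(f(dict(zip(variables, bits))) == g(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

conj = lambda s: s["p"] and s["q"]
# Forgetting q in (p and q) yields a formula equivalent to p:
print(equivalent(forget(conj, ["q"]), lambda s: s["p"], ["p", "q"]))  # True
```

By Proposition 4 below, the result of forgetting is the strongest consequence of the input that is independent from the forgotten variables; in particular, the input always entails it.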
Definition 3 Let ϕ be a propositional formula, and V a set of propositional variables.We say ϕ is independent from V if and only if ϕ is logically equivalent to a formula in which none of the variables in V appears.
The following proposition was given in the work of Lang, Liberatore and Marquis (2003).
Proposition 4 Let ϕ be a propositional formula, and V a set of propositional variables.Then ∃V ϕ is the logically strongest consequence of ϕ that is independent from V (up to logical equivalence).

Weakest Sufficient Conditions
The notions of weakest sufficient conditions and strongest necessary conditions were first formalized via the notion of variable forgetting by Lin (2001); they play an essential role in our approach.
Definition 5 Let V be a set of propositional variables and V′ ⊆ V. Given a set of formulas Γ over V as a background knowledge base and a formula α over V:
• A formula ϕ over V′ is called a sufficient condition of α over V′ under Γ if Γ |= ϕ ⇒ α. It is called a weakest sufficient condition of α over V′ under Γ if it is a sufficient condition of α over V′ under Γ and, for any sufficient condition ϕ′ of α over V′ under Γ, we have Γ |= ϕ′ ⇒ ϕ.
• A formula ϕ over V′ is called a necessary condition of α over V′ under Γ if Γ |= α ⇒ ϕ. It is called a strongest necessary condition of α over V′ under Γ if it is a necessary condition of α over V′ under Γ and, for any necessary condition ϕ′ of α over V′ under Γ, we have Γ |= ϕ ⇒ ϕ′.
The notions given above are closely related to the theory of abduction. Given an observation, there may be more than one abductive conclusion that we can draw, and it is useful to find the weakest such conclusion, i.e., the weakest sufficient condition of the observation (Lin, 2001). The notions of strongest necessary and weakest sufficient conditions of a proposition also have many potential applications in other areas, such as reasoning about actions. The following proposition, which is due to Lin (2001), shows how to compute the two conditions.
Proposition 6 Given a background knowledge base {θ} over V, a formula α over V, and a subset V′ of V. Let SNC_α and WSC_α be a strongest necessary condition and a weakest sufficient condition of α over V′ under {θ}, respectively. Then
• SNC_α is logically equivalent to ∃(V − V′)(θ ∧ α); and
• WSC_α is logically equivalent to ¬∃(V − V′)(θ ∧ ¬α).
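As a small illustration of how SNCs and WSCs are computed by forgetting, the sketch below (helper names are ours) applies the identities SNC = ∃(V − V′)(θ ∧ α) and WSC = ¬∃(V − V′)(θ ∧ ¬α) from Lin (2001) to the communication scenario of the introduction, and recovers the claim there that the weakest sufficient condition of Bob_recv_msg over Alice's variables under Γ_CS is Alice_recv_ack.

```python
from itertools import product

# Formulas as predicates over assignments (dict var -> bool).
def forget(phi, V):
    # exists{p}.phi = phi[p/true] or phi[p/false], variable by variable
    for p in V:
        phi = (lambda f, q: lambda s: f({**s, q: True}) or f({**s, q: False}))(phi, p)
    return phi

def snc(theta, alpha, V, Vp):
    # strongest necessary condition of alpha over Vp under {theta}:
    # exists(V - Vp)(theta and alpha)
    return forget(lambda s: theta(s) and alpha(s), [v for v in V if v not in Vp])

def wsc(theta, alpha, V, Vp):
    # weakest sufficient condition of alpha over Vp under {theta}:
    # not exists(V - Vp)(theta and not alpha)
    g = forget(lambda s: theta(s) and not alpha(s), [v for v in V if v not in Vp])
    return lambda s: not g(s)

# The communication scenario's background knowledge base Gamma_CS:
V = ["asm", "ara", "brm", "bsa"]  # Alice_send_msg, Alice_recv_ack,
                                  # Bob_recv_msg, Bob_send_ack
theta = lambda s: ((not s["brm"] or s["asm"]) and
                   (not s["bsa"] or s["brm"]) and
                   (not s["ara"] or s["bsa"]))

# Weakest sufficient condition of Bob_recv_msg over Alice's variables;
# it agrees with Alice_recv_ack on every assignment.
w = wsc(theta, lambda s: s["brm"], V, ["asm", "ara"])
```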

Epistemic Logic and Kripke Structure
We now recall some standard concepts and notation related to the modal logics of multi-agent knowledge.
Given a set V of propositional variables, let L(V) be the set of all propositional formulas over V. The language of epistemic logic, denoted by L_n(V), is L(V) augmented with a modal operator K_i for each agent i; K_i φ can be read "agent i knows φ". Let L^C_n(V) be the language L_n(V) augmented with a modal operator C_∆ for each set ∆ of agents. A formula C_∆ α indicates that it is common knowledge among the agents in ∆ that α holds. We omit the argument V and write L_n and L^C_n if it is clear from context. According to the paper by Halpern and Moses (1992), the semantics of these formulas can be given by means of Kripke structures (Kripke, 1963), which formalize the intuition behind possible worlds. A Kripke structure is a tuple (W, π, K_1, …, K_n), where W is a set of worlds, π associates with each world a truth assignment to the propositional variables, so that π(w)(p) ∈ {true, false} for each world w and propositional variable p, and K_1, …, K_n are binary accessibility relations. By convention, W^M, K_i^M and π^M are used to refer to the set W of possible worlds, the K_i relation and the π function in the Kripke structure M, respectively. We omit the superscript M if it is clear from context. Finally, let C_∆ be the transitive closure of ⋃_{i∈∆} K_i.
A situation is a pair (M, w) consisting of a Kripke structure M and a world w in M. By using situations, we can inductively give semantics to formulas as follows: for primitive propositions p, (M, w) |= p iff π(w)(p) = true. Conjunctions and negations are dealt with in the standard way. Finally, (M, w) |= K_i α iff for all w′ ∈ W such that (w, w′) ∈ K_i^M, we have that (M, w′) |= α; and (M, w) |= C_∆ α iff for all w′ ∈ W such that (w, w′) ∈ C_∆^M, we have that (M, w′) |= α. We say a formula α is satisfiable in a Kripke structure M if (M, w) |= α for some possible world w in M.
A Kripke structure M is called an S5 n Kripke structure if, for every i, K M i is an equivalence relation.A Kripke structure M is called a finite Kripke structure if the set of possible worlds is finite.According to the work of Halpern and Moses (1992), we have the following lemma.
Lemma 7 If a formula is satisfiable in an S5_n Kripke structure, then it is satisfiable in a finite S5_n Kripke structure.

Knowledge and Weakest Sufficient Conditions
In our framework, a knowledge structure is a simple model of reasoning about knowledge.The advantage of this model is, as will be shown later, that agents' knowledge can be computed via the operation of variable forgetting.

Knowledge Structure
Definition 8 A knowledge structure with n agents is a tuple F = (V, Γ, O_1, …, O_n), where (1) V is a set of propositional variables; (2) Γ is a consistent set of propositional formulas over V; and (3) for each agent i, O_i ⊆ V.

The variables in O_i are called agent i's observable variables. An assignment that satisfies Γ is called a state of the knowledge structure F. Given a state s of F, we define agent i's local state at s as s ∩ O_i. Two knowledge structures are said to be equivalent if they have the same set of propositional variables, the same set of states and, for each agent i, the same set of agent i's observable variables.
A pair (F, s) of knowledge structure F and a state s of F is called a scenario.
Given a set Γ of formulas over V and a set V of subsets of V, we use E_V to denote the relation between assignments s, s′ on V satisfying Γ such that (s, s′) ∈ E_V iff there exists a P ∈ V with s ∩ P = s′ ∩ P. We use E*_V to denote the transitive closure of E_V. For a group ∆ of agents, we write V_∆ for the set {O_i | i ∈ ∆}.

A simple instance of a knowledge structure is F_0 = ({p, q}, {p ⇒ q}, {p}, {q}), where p and q are propositional variables. There are two agents in F_0: variable p is observable to agent 1, and q is observable to agent 2. We have that V_{1,2} = {{p}, {q}}; and for any two subsets s and s′ of {p, q} that satisfy p ⇒ q, we have that (s, s′) ∈ E*_{V_{1,2}}. We now give the semantics of the language L^C_n based on scenarios.
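The claim about F_0, namely that any two states satisfying p ⇒ q are related by E*_{V_{1,2}}, can be checked mechanically. The sketch below (helper names are ours) enumerates the states of F_0 and closes the agreement relation E_V under transitivity.

```python
from itertools import product

# States of F_0 = ({p, q}, {p => q}, {p}, {q}): assignments satisfying p => q,
# identified with subsets of {p, q}.
variables = ["p", "q"]
gamma = lambda s: (not s["p"]) or s["q"]

states = []
for bits in product([False, True], repeat=2):
    s = dict(zip(variables, bits))
    if gamma(s):
        states.append(frozenset(v for v in variables if s[v]))

def e_star(V_sets):
    # transitive closure of E_V, where (s, t) in E_V iff s and t agree
    # on some observable set P in V
    rel = {(s, t) for s in states for t in states
           if any(s & P == t & P for P in V_sets)}
    while True:
        new = {(s, u) for (s, t) in rel for (t2, u) in rel if t == t2}
        if new <= rel:
            return rel
        rel |= new

V_12 = [frozenset({"p"}), frozenset({"q"})]
closure = e_star(V_12)
# Every pair of states of F_0 is related by E*_{V_{1,2}}:
print(all((s, t) in closure for s in states for t in states))  # True
```

For instance, the empty state and {p, q} disagree on both p and q, yet they are connected through the intermediate state {q}.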

Definition 9 The satisfaction relation |= between a scenario (F, s) and a formula ϕ is defined by induction on the structure of ϕ:
• (F, s) |= p iff s |= p, for a primitive proposition p;
• (F, s) |= ¬ϕ iff (F, s) does not satisfy ϕ;
• (F, s) |= ϕ_1 ∧ ϕ_2 iff (F, s) |= ϕ_1 and (F, s) |= ϕ_2;
• (F, s) |= K_i ϕ iff for every state s′ of F such that s ∩ O_i = s′ ∩ O_i, (F, s′) |= ϕ;
• (F, s) |= C_∆ ϕ iff for every state s′ of F such that (s, s′) ∈ E*_{V_∆}, (F, s′) |= ϕ.

We say that a propositional formula is i-local if it is a formula over O_i. We say that a formula α is realized in a knowledge structure F if, for every state s of F, (F, s) |= α. For convenience, we write F |= α to denote that α is realized in F.
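The K_i clause of the scenario semantics can be model-checked directly by enumerating states: agent i knows α at s iff α holds at every state sharing s's local state s ∩ O_i. The sketch below (function names are ours) runs this on the toy structure F_0.

```python
from itertools import product

# Knowledge structure F_0 = ({p, q}, {p => q}, O_1 = {p}, O_2 = {q}).
variables = ["p", "q"]
gamma = lambda s: (not s["p"]) or s["q"]
obs = {1: {"p"}, 2: {"q"}}

states = []
for bits in product([False, True], repeat=2):
    s = dict(zip(variables, bits))
    if gamma(s):
        states.append(s)

def knows(i, alpha, s):
    """(F, s) |= K_i alpha: alpha holds at every state sharing
    agent i's local state s ∩ O_i."""
    return all(alpha(t) for t in states
               if all(t[v] == s[v] for v in obs[i]))

s_pq = {"p": True, "q": True}
s_empty = {"p": False, "q": False}
print(knows(1, lambda t: t["q"], s_pq))     # True: agent 1 sees p, and p => q
print(knows(1, lambda t: t["q"], s_empty))  # False: q fails at an
                                            # indistinguishable state
```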
We conclude this subsection with the following lemma, which will be used in the remainder of this paper.
Lemma 10 Let V be a finite set of variables, 6. for any formulas α_1 and α_2, (F, s) 7. for any formula α and i ∈ ∆, (F, s)

Proof: 1. The first item of this lemma can be proved by induction on the structure of ψ. When ψ is a primitive proposition, it is done by the first item of Definition 9. When ψ is of the form of a negation or a conjunction, the conclusion also follows immediately by the first item of Definition 9.

2. The second item can be proved by the first item and the fact that s satisfies Γ.

4. Suppose that, for each i ∈ ∆, there exists an i-local formula logically equivalent to β under Γ. We need to show that (F, s) |= C_∆ β.

Given an
To prove that (F, s) |= C_∆ β, we need to show that for every assignment s′ such that (s, s′) ∈ E*_{V_∆}, (F, s′) |= β. From the definition of E*_{V_∆}, it suffices to show that for every finite sequence of assignments s_0, …, s_k with s_0 = s and (s_j, s_{j+1}) ∈ E_{V_∆} (0 ≤ j < k), we have that for every j ≤ k, (F, s_j) |= β. We show this by induction on j. When j = 0, the result is clearly true. Assume (F, s_j) |= β; we now prove (F, s_{j+1}) |= β. Since (s_j, s_{j+1}) ∈ E_{V_∆}, there is an i ∈ ∆ with s_j ∩ O_i = s_{j+1} ∩ O_i. We have that s_j |= β iff s_{j+1} |= β, because β is equivalent under Γ to an i-local formula. Hence, (F, s_{j+1}) |= β, as desired. 2

Given a knowledge structure F = (V, Γ, O_1, …, O_n), we define a corresponding Kripke structure M_F = (W, π, K_1, …, K_n) as follows:
1. W is the set of all states of F;
2. for each w ∈ W, the assignment π(w) is the same as w; and
3. for each agent i and assignments w, w′ ∈ W, we have that (w, w′) ∈ K_i iff w ∩ O_i = w′ ∩ O_i.

The following proposition indicates that a knowledge structure can be viewed as a specific Kripke structure.
Proposition 11 Given a knowledge structure F, a state s of F, and a formula α in L^C_n(V), we have that (F, s) |= α iff (M_F, s) |= α.

Proof: Immediate by the definition of the satisfaction relation between a scenario and a formula and that between a situation and a formula. 2

From Proposition 11, we conclude that if a formula in L^C_n is satisfiable in some knowledge structure, then the formula is also satisfiable in some Kripke structure. From the following proposition and Lemma 7, we get that if a formula in L^C_n is satisfiable in some Kripke structure, then the formula is also satisfiable in some knowledge structure.
Proposition 12 For a finite S5_n Kripke structure M with propositional variable set V and a possible world w in M, there exists a knowledge structure F_M and a state s_w of F_M such that, for every formula α ∈ L^C_n(V), we have that (F_M, s_w) |= α iff (M, w) |= α.
2. for each i (0 < i ≤ n), the number of all subsets of O i is not less than that of all equivalence classes of R i .
By the latter condition, there is, for each i, a function g_i : W → 2^{O_i} such that, for all w_1, w_2 ∈ W, g_i(w_1) and g_i(w_2) are the same subset of O_i iff w_1 and w_2 are in the same equivalence class of R_i. The following two claims hold:
C1 For all w_1, w_2 ∈ W and i (0 < i ≤ n), g_i(w_1) = g_i(w_2) iff w_1 and w_2 are in the same equivalence class of R_i.
C2 For all w ∈ W and propositional variables p ∈ V, g(w) |= p iff π(w)(p) = true.
Let Γ_M = {α | α is a formula over V and g(w) |= α for all w ∈ W}.
We then get the knowledge structure F_M. We now show the following claim:
C3 An assignment s is a state of F_M iff s = g(w) for some w ∈ W.
The "if" part of claim C3 is easy to prove: if s = g(w′) for some w′ ∈ W, then by the definition of Γ_M, we have that g(w′) |= Γ_M, and hence g(w′) is a state of F_M. To show the "only if" part, assume that, for every w ∈ W, s ≠ g(w). Then, for every w ∈ W, there exists a formula α_w over V such that s |= α_w but g(w) |= ¬α_w. Therefore, s |= ⋀_{w∈W} α_w. Moreover, we have that, for every w′ ∈ W, g(w′) |= ⋁_{w∈W} ¬α_w, and hence ⋁_{w∈W} ¬α_w ∈ Γ_M. Consequently, s does not satisfy Γ_M, and hence s is not a state of F_M.
To complete the proof, it suffices to show, for every formula α ∈ L^C_n(V) and every w ∈ W, that (F_M, g(w)) |= α iff (M, w) |= α. With conditions C1, C2 and C3, we can do so by induction on α. For the base case, we assume α is a propositional variable, say p. Then, by condition C2, we have that (F_M, g(w)) |= p iff (M, w) |= p. Suppose that α is not a propositional variable and the claim holds for every subformula of α. There are three cases:
1. α is of the form ¬β or β ∧ γ. This case can be dealt with by the definitions of the satisfaction relations directly.
2. α is of the form K_i β. By claim C1 and condition C3, for every w′ ∈ W, (w, w′) ∈ R_i iff g(w) and g(w′) agree on O_i; therefore, by the induction assumption, we have (F_M, g(w)) |= α iff (M, w) |= α.
3. α is of the form C_∆ β. Recall that, for arbitrary two states s and s′ of F_M, (s, s′) ∈ E*_{V_∆} iff they are connected by a finite chain of states each pair of which agrees on the observable variables of some agent in ∆; therefore, by the induction assumption, we have (F_M, g(w)) |= α iff (M, w) |= α. 2

Propositions 11 and 12 show that the satisfiability issue for a formula in the language of multi-agent S5 with the common knowledge modality is the same whether satisfiability is meant w.r.t. a standard Kripke structure or w.r.t. a knowledge structure.

Knowledge as Weakest Sufficient Conditions
The following theorem establishes a bridge between the notion of knowledge and the notions of weakest sufficient and strongest necessary conditions.

Theorem 13 Let F = (V, Γ, O_1, …, O_n) be a knowledge structure with n agents, and α a propositional formula over V. Then, for every state s of F, (F, s) |= K_i α iff s |= WSC^α_i, where WSC^α_i is a weakest sufficient condition of α over O_i under Γ.

Proof: We show the part concerning the weakest sufficient condition; the other part comes in a straightforward way by the duality between WSCs and SNCs. Because WSC^α_i is a sufficient condition of α under Γ, we have Γ |= WSC^α_i ⇒ α. Let θ be the conjunction of all formulas in Γ. By Proposition 6, WSC^α_i is equivalent to ¬∃(V − O_i)(θ ∧ ¬α), which is a formula over O_i. On the other hand, we know that (F, s) |= K_i α iff α holds at every state of F agreeing with s on O_i. 2

The following corollary characterizes the subjective formulas K_i α (where α is objective) which are satisfied in a given knowledge structure.
Corollary 14 Let F = (V, {θ}, O_1, …, O_n) be a knowledge structure with n agents, and α a formula over V. Then, for every state s of F, (F, s) |= K_i α iff s |= ¬∃(V − O_i)(θ ∧ ¬α).

Proof: Immediate by Theorem 13 and Proposition 6. 2

Example 15: Now we consider the communication scenario between Alice and Bob addressed in Section 1 once again. To show how our system can deal with the knowledge reasoning issue in this scenario, we define a knowledge structure F = (V, {θ}, O_Alice, O_Bob) as follows:
• V = {Alice_send_msg, Alice_recv_ack, Bob_recv_msg, Bob_send_ack};
• O_Alice = {Alice_send_msg, Alice_recv_ack} and O_Bob = {Bob_recv_msg, Bob_send_ack}; and
• θ is the conjunction of the following three formulas: Bob_recv_msg ⇒ Alice_send_msg, Bob_send_ack ⇒ Bob_recv_msg, and Alice_recv_ack ⇒ Bob_send_ack.

Suppose s is the state in which all four variables hold, and we would like to know whether Alice knows that Bob received the message. Consider the formula ¬∃(V − O_Alice)(θ ∧ ¬Bob_recv_msg). From Definition 1, this formula is simplified to Alice_recv_ack, which, obviously, is satisfied in the scenario (F, s), i.e., (F, s) |= Alice_recv_ack. Then, from Corollary 14, we have (F, s) |= K_Alice Bob_recv_msg. From item 3 of Lemma 10, it follows that (F, s) |= K_Alice (Alice_send_msg ∧ Alice_recv_ack), which indicates that Alice knows that she sent the message and she knows that she received the acknowledgement from Bob. 2

Given a set of states S of a knowledge structure F and a formula α, by (F, S) |= α we mean that, for each s ∈ S, (F, s) |= α. The following proposition presents an alternative way to compute an agent's knowledge.
Proposition 16 Let F = (V, {θ}, O_1, …, O_n) be a knowledge structure with n agents, α and ψ two formulas over V, and let S_ψ denote the set of states s of F such that (F, s) |= ψ. Then (F, S_ψ) |= K_i α iff θ ∧ ∃(V − O_i)(θ ∧ ψ) |= α.

The intuitive meaning behind Proposition 16 is that if all we know about the current state is ψ, then all we know about agent i's knowledge (or agent i's observations) is the strongest necessary condition of ψ over O_i.
The following proposition provides a method to determine whether a formula with nested knowledge operators (like K_{i_1} K_{i_2} ⋯ K_{i_k} α, where α is a propositional formula) is always true in those states where a given propositional formula ψ is true.
Proposition 17 Let F = (V, {θ}, O_1, …, O_n) be a knowledge structure with n agents, α and ψ two formulas over V, and let S_ψ denote the set of states s of F such that (F, s) |= ψ. Then, for each group of agents i_1, …, i_k, (F, S_ψ) |= K_{i_1} K_{i_2} ⋯ K_{i_k} α iff θ ∧ ψ_k |= α, where ψ_k is defined inductively as follows: ψ_1 = ∃(V − O_{i_1})(θ ∧ ψ), and for each j < k, ψ_{j+1} = ∃(V − O_{i_{j+1}})(θ ∧ ψ_j).

Proof: We show this proposition by induction on the nested depth of knowledge operators. The base case is implied directly by Proposition 16. Assume that the claim holds for those cases with nested depth k; we want to show that it also holds when the nested depth is k + 1. By Proposition 16, the outermost knowledge operator can be eliminated by one application of forgetting. By the inductive assumption, the remaining nested operators can be eliminated likewise. Combining the two assertions above, we get the desired result. 2

2
When we consider the case where the nested depth of knowledge operators is no more than 2, we get the following corollary.
Corollary 18 Let V, F, α, ψ and S_ψ be as in Proposition 17. Then, for each agent i and each agent j, we have:
1. (F, S_ψ) |= K_i α iff θ ∧ ∃(V − O_i)(θ ∧ ψ) |= α;
2. (F, S_ψ) |= K_j K_i α iff θ ∧ ∃(V − O_i)(θ ∧ ∃(V − O_j)(θ ∧ ψ)) |= α.

Proof: Immediate from Proposition 17. 2

As will be illustrated in our analysis of security protocols (i.e., Section 8), part 2 of Corollary 18 is useful for verifying protocol specifications with nested knowledge operators. Given a background knowledge base θ, when we face the task of testing whether K_j K_i α holds in those states satisfying ψ, by part 2 of Corollary 18, we can first compute φ_1 = ∃(V − O_j)(θ ∧ ψ), which is a strongest necessary condition of ψ over O_j. This is all we know about what agent j observes from ψ. Then we compute φ_2 = ∃(V − O_i)(θ ∧ φ_1), i.e., the strongest necessary condition of φ_1 over O_i, which is, from the viewpoint of agent j, all that is known about what agent i observes. In this way, the task of checking K_j K_i α is reduced to the task of checking θ ∧ φ_2 ⇒ α.
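The two-step reduction just described can be run directly on the communication scenario. In the sketch below (all names are ours), we check whether K_Alice K_Bob Alice_send_msg holds in the states where Alice_recv_ack is true, with j = Alice as the outer agent and i = Bob as the inner agent.

```python
from itertools import product

# Formulas as predicates over assignments (dict var -> bool).
def forget(phi, V):
    for p in V:
        phi = (lambda f, q: lambda s: f({**s, q: True}) or f({**s, q: False}))(phi, p)
    return phi

def snc(theta, psi, V, O):
    # strongest necessary condition of psi over O: exists(V - O)(theta and psi)
    return forget(lambda s: theta(s) and psi(s), [v for v in V if v not in O])

def entails(f, g, variables):
    # f |= g, by truth-table enumeration
    return all((not f(dict(zip(variables, bits)))) or g(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

V = ["asm", "ara", "brm", "bsa"]  # the communication scenario's variables
theta = lambda s: ((not s["brm"] or s["asm"]) and
                   (not s["bsa"] or s["brm"]) and
                   (not s["ara"] or s["bsa"]))
O_alice, O_bob = ["asm", "ara"], ["brm", "bsa"]

# Does K_Alice K_Bob Alice_send_msg hold wherever Alice_recv_ack holds?
phi1 = snc(theta, lambda s: s["ara"], V, O_alice)  # what Alice observes from psi
phi2 = snc(theta, phi1, V, O_bob)                  # what that says Bob observes
print(entails(lambda s: theta(s) and phi2(s), lambda s: s["asm"], V))  # True
```

Here φ_1 turns out to be Alice_send_msg ∧ Alice_recv_ack and φ_2 turns out to be Bob_recv_msg ∧ Bob_send_ack, and θ ∧ φ_2 entails Alice_send_msg via Bob_recv_msg ⇒ Alice_send_msg.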
The following corollary gives two methods to check the truth of K_i α (where α is a propositional formula) in all those states where a given formula ψ is true: one is via the weakest sufficient condition of α, and the other is via the strongest necessary condition of ψ.

Corollary 19 Let V be a finite set of propositional variables and F = (V, {θ}, O_1, …, O_n) a knowledge structure with n agents, and let α and ψ be two formulas over V. Suppose that S_ψ denotes the set of all states s of F such that (F, s) |= ψ, and that SNC^ψ_i and WSC^α_i are a strongest necessary condition of ψ over O_i and a weakest sufficient condition of α over O_i under {θ}, respectively. Then:
1. (F, S_ψ) |= K_i α iff θ |= ψ ⇒ WSC^α_i;
2. (F, S_ψ) |= K_i α iff θ ∧ SNC^ψ_i |= α.

Proof: The first part of the corollary follows from Theorem 13 and Lemma 10, while the second part follows immediately by Proposition 16. 2

In our analysis of security protocols, we observe that, very often, it seems more efficient to check an agent's knowledge via the second part of Corollary 19 rather than via the first part. But this may not always be true for other applications (e.g., see the example of the muddy children puzzle in the next section).

Common Knowledge
Common knowledge is a special kind of knowledge for a group of agents, which plays an important role in reasoning about knowledge (Fagin et al., 1995). A group ∆ of agents commonly know ϕ when all the agents in ∆ know ϕ, they all know that they all know ϕ, and so on ad infinitum. We recall that common knowledge can be characterized in terms of Kripke structures. Given a Kripke structure M = (W, π, K_1, …, K_n), a group ∆ of agents commonly know ϕ (or, in modal logic language, C_∆ ϕ is true) in a world w iff ϕ is true in all worlds w′ such that (w, w′) ∈ C_∆, where C_∆ denotes the transitive closure of ⋃_{i∈∆} K_i.
In this section, we generalize the concept of weakest sufficient and strongest necessary conditions so that they can be used to compute common knowledge.

Generalized Weakest Sufficient and Strongest Necessary Conditions
The following gives a generalized notion of weakest sufficient and strongest necessary conditions.

Definition 20 Given a set of formulas Γ over V as a background knowledge base, let α be a formula over V, and V a nonempty set of subsets of V.
• A formula ϕ is called V-definable under Γ (or simply called V-definable if there is no confusion in the context), if for each P ∈ V, there is a formula ψ P over P such that Γ |= ϕ ⇔ ψ P .
• A formula ϕ is called a V-sufficient condition of α under Γ if ϕ is V-definable under Γ and Γ |= ϕ ⇒ α. It is called a weakest V-sufficient condition of α under Γ if it is a V-sufficient condition of α under Γ and, for any other V-sufficient condition ϕ′ of α under Γ, we have Γ |= ϕ′ ⇒ ϕ.
• A formula ϕ is called a V-necessary condition of α under Γ if ϕ is V-definable under Γ and Γ |= α ⇒ ϕ. It is called a strongest V-necessary condition of α under Γ if it is a V-necessary condition of α under Γ and, for any other V-necessary condition ϕ′ of α under Γ, we have Γ |= ϕ ⇒ ϕ′.
We notice that the notion of V-definability introduced here is a simple elaboration of the notion of definability given in the work of Lang and Marquis (1998). Moreover, it is easy to see that the formulas implied by Γ or inconsistent with it are exactly the formulas ∅-definable under Γ, and that definability exhibits a monotonicity property: if φ is V-definable under Γ, then φ is V′-definable under Γ for each superset V′ of V (Lang & Marquis, 1998). Observe also that φ is V-definable under Γ iff ¬φ is V-definable under Γ, and this extends trivially to V-definability.
The following lemma says that the notions of weakest V-sufficient conditions and strongest V-necessary ones are dual to each other.
Lemma 21 Given a set of formulas Γ over V as a background knowledge base, and V a set of subsets of V .Let ϕ and α be formulas over V .Then, we have that ϕ is a weakest V-sufficient condition of α under Γ iff ¬ϕ is a strongest V-necessary condition of ¬α under Γ.
Proof: Straightforward by the duality between WSCs and SNCs. 2 To give some intuition and motivation of the above definition, let us consider the following example.
Example 22: Imagine that there are two babies, say Mary and Peter, playing with a dog. Suppose the propositions "The dog is moderately satisfied" (denoted by m, for short) and "The dog is full" (f) are understandable to Mary, and the propositions "The dog is hungry" (h) and "The dog is unhappy" (u) are understandable to Peter.
The first claim is easy to check by the definition. The last two claims follow immediately if we can prove that all the V-definable propositions under Γ are false, true, h and ¬h (up to logical equivalence under Γ). There are 8 propositions over V_1 up to logical equivalence under Γ: true, false, m, ¬m, f, ¬f, m ∨ f, ¬m ∧ ¬f. Similarly, there are 8 propositions over V_2 up to logical equivalence under Γ: true, false, h, ¬h, u, ¬u, h ∨ ¬u, ¬h ∧ u. However, we can find, between the two classes of propositions, only 4 pairs of equivalences under Γ: Γ |= true ⇔ true, Γ |= false ⇔ false, Γ |= (m ∨ f) ⇔ ¬h, and Γ |= (¬m ∧ ¬f) ⇔ h.

Let Γ be a set of formulas, V a set of propositional variables, and V a set of subsets of V. The following proposition gives the existence of weakest V-sufficient and strongest V-necessary conditions. For a given formula α over V, a weakest V-sufficient condition φ_1 of α and a strongest V-necessary condition φ_2 of α are characterized in the proposition; indeed, the set of assignments satisfying φ_1 and that of assignments satisfying φ_2 are given in terms of the relation E_V.
Proposition 24 Given a finite set V of propositional variables, a set Γ of formulas over V as a background knowledge base, a formula α over V, and a set V of subsets of V. Denote by S^α_WSC the set of assignments s over V such that s |= Γ and, for all assignments s′ satisfying Γ with (s, s′) ∈ E*_V, s′ |= α. Also denote by S^α_SNC the set of assignments s over V such that s |= Γ and there exists an s′ such that s′ |= Γ, s′ |= α and (s, s′) ∈ E*_V. Then, the following two points hold.

• If a formula is satisfied exactly by those assignments in S^α_WSC, then the formula is a weakest V-sufficient condition of α under Γ; and
• If a formula is satisfied exactly by those assignments in S^α_SNC, then the formula is a strongest V-necessary condition of α under Γ.
Proof: We first prove the former point, and then show the other by Lemma 21. Let φ_1 be a propositional formula over V such that, for all assignments s, s |= φ_1 iff s ∈ S^α_WSC. Then, for every assignment s ∈ S^α_WSC, we have s |= α, because (s, s) ∈ E*_V. Thus, φ_1 |= α. We remark that, for an arbitrarily given formula ϕ over V and assignment s over V, s |= ∀(V − P)ϕ iff for all assignments s′ over V such that s ∩ P = s′ ∩ P, we have s′ |= ϕ.
To prove that φ1 is V-definable, we show that, for each P ∈ V, φ1 |= ∀(V − P)φ1, which implies that φ1 is equivalent to the formula ∀(V − P)φ1 over P. To prove φ1 |= ∀(V − P)φ1 in a semantical way, it suffices to show that, for every assignment s ∈ S^α_WSC and every s' with s' |= Γ, if s' ∩ P = s ∩ P, then s' ∈ S^α_WSC. Let s and s' be given as above and suppose s' ∩ P = s ∩ P. Then (s, s') ∈ E_V. Given an assignment t such that t |= Γ, if (s', t) ∈ E*_V, then (s, t) ∈ E*_V by (s, s') ∈ E_V, and hence t |= α because s ∈ S^α_WSC. Thus, s' ∈ S^α_WSC. This proves that φ1 is V-definable.

Now we show that φ1 is a weakest V-sufficient condition of α under Γ. Suppose φ is a V-definable sufficient condition of α under Γ; we want to prove that Γ |= φ ⇒ φ1. The semantical argument of such a proof is as follows. Let s be an assignment with s |= Γ and s |= φ; we must show that s ∈ S^α_WSC, i.e., for every assignment s' with s' |= Γ such that (s, s') ∈ E*_V, s' |= α. By the definition of E*_V, there is a finite sequence of assignments s0, ..., sk such that sj |= Γ for every j, with s0 = s and sk = s', and for every j < k, (sj, sj+1) ∈ E_V. By the V-definability of φ, we know that, for every j < k, sj |= φ implies sj+1 |= φ. Thus, we have s' |= φ by induction; since φ is a sufficient condition of α under Γ and s' |= Γ, it follows that s' |= α.

Now we prove the second point of this proposition by Lemma 21. Let φ2 be a propositional formula over V such that, for all assignments s, s |= φ2 iff s ∈ S^α_SNC. Let θ be the conjunction of the formulas in Γ. Then s |= ¬φ2 ∧ θ iff, for all assignments s' with s' |= Γ such that (s, s') ∈ E*_V, we have s' |= ¬α. Thus, by the first point of this proposition, ¬φ2 ∧ θ is a weakest V-sufficient condition of ¬α under Γ. Therefore φ2 ∨ ¬θ, and hence φ2, is a strongest V-necessary condition of α according to Lemma 21. □

The above proposition can be thought of as a semantical characterization of weakest V-sufficient and strongest V-necessary conditions.
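When V is small, the characterization of Proposition 24 can be evaluated directly by enumeration. The following sketch (the variable names, Γ, and α are illustrative toy choices, not taken from the paper) represents assignments as frozensets of true variables, builds the one-step relation E_V from the family of observable sets, closes it transitively, and extracts S^α_WSC and S^α_SNC exactly as in the proposition:

```python
from itertools import combinations

V = ("p", "q", "r")                      # propositional variables (toy example)
VV = [{"p", "q"}, {"q", "r"}]            # the family V of subsets of V

# All assignments over V, each represented as the frozenset of true variables.
assignments = [frozenset(c) for k in range(len(V) + 1)
               for c in combinations(V, k)]

gamma = lambda s: ("p" in s) <= ("q" in s)   # background knowledge Γ: p ⇒ q
alpha = lambda s: "q" in s                   # target formula α = q

G = [s for s in assignments if gamma(s)]     # models of Γ

def e_step(s, t):
    """(s, t) ∈ E_V: s and t agree on some observable set P ∈ V."""
    return any(s & P == t & P for P in map(frozenset, VV))

def reachable(s):
    """Models of Γ reachable from s under E*_V (reflexive-transitive closure)."""
    seen, frontier = {s}, [s]
    while frontier:
        u = frontier.pop()
        for t in G:
            if t not in seen and e_step(u, t):
                seen.add(t)
                frontier.append(t)
    return seen

# Proposition 24: the assignment sets defining the WSC and SNC of α under Γ.
S_wsc = {s for s in G if all(alpha(t) for t in reachable(s))}
S_snc = {s for s in G if any(alpha(t) for t in reachable(s))}
```

The enumeration is exponential in |V|, so this is only a specification-level check; the paper's fixed-point and forgetting-based computations are the practical route.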

Characterizations with Least and Greatest Fixed Points
We investigate the computation of the weakest V-sufficient and strongest V-necessary conditions by using the notions of least and greatest fixed points of an operator, which are introduced as follows. Let V be a set of propositional variables, and Λ an operator (or a mapping) from the set of propositional formulas over V to the set of propositional formulas over V. We say that ψ is a fixed point of Λ if |= Λ(ψ) ⇔ ψ. We say that ψ0 is a greatest fixed point of Λ if ψ0 is a fixed point of Λ and, for every fixed point ψ of Λ, we have |= ψ ⇒ ψ0. Clearly, any two greatest fixed points are logically equivalent to each other; thus, we denote a greatest fixed point of Λ by gfp Z Λ(Z). Similarly, we say that ψ0 is a least fixed point of Λ if ψ0 is a fixed point of Λ and, for every fixed point ψ of Λ, we have |= ψ0 ⇒ ψ; we denote a least fixed point of Λ by lfp Z Λ(Z). We say Λ is monotonic if, for every two formulas ψ1 and ψ2 such that |= ψ1 ⇒ ψ2, we have |= Λ(ψ1) ⇒ Λ(ψ2). For a finite set V of propositional variables, if Λ is monotonic, then it has both a least and a greatest fixed point (Tarski, 1955).

Theorem 25 Let θ be a propositional formula over V, α a propositional formula over V, ∆ a set of agents with observable variable sets V_∆ = {O_i | i ∈ ∆}, and let Λ1(Z) = ⋀_{i∈∆} ∀(V − O_i)(θ ⇒ Z) and Λ2(Z) = ⋁_{i∈∆} ∃(V − O_i)(θ ∧ Z). Then,

• a weakest V_∆-sufficient condition of α under {θ} is equivalent to gfp Z(α ∧ Λ1(Z)); and
• a strongest V_∆-necessary condition of α under {θ} is equivalent to lfp Z(α ∨ Λ2(Z)).
Proof: Let WSC^α_∆ be a weakest V_∆-sufficient condition of α under {θ}. Note that the operator α ∧ Λ1(Z) is monotonic and thus has a greatest fixed point. Let ψ1 = gfp Z(α ∧ Λ1(Z)). To prove the first point of this theorem, we must show that θ |= WSC^α_∆ ⇔ ψ1. Because ψ1 can be obtained by a finite iteration of the operator with the starting point α ∧ Λ1(true), to establish θ |= WSC^α_∆ ⇒ ψ1 we need only prove that: 1. θ |= WSC^α_∆ ⇒ α ∧ Λ1(true); and 2. for an arbitrary propositional formula ϕ over V, if θ |= WSC^α_∆ ⇒ ϕ, then θ |= WSC^α_∆ ⇒ α ∧ Λ1(ϕ). The first point is trivially true because Λ1(true) is equivalent to true and WSC^α_∆ is a sufficient condition of α under {θ}. To show the second point, suppose θ |= WSC^α_∆ ⇒ ϕ; since WSC^α_∆ is V_∆-definable, the conclusion of the second point follows immediately.
We now show that θ |= (θ ⇒ ψ1) ⇒ WSC^α_∆. Since WSC^α_∆ is a weakest V_∆-sufficient condition of α under {θ}, it suffices to prove that θ ⇒ ψ1 is a V_∆-definable sufficient condition of α under {θ}. By the fact that ψ1 is a fixed point of the operator α ∧ Λ1(Z), we have that |= ψ1 ⇔ α ∧ Λ1(ψ1). It follows that |= ψ1 ⇒ α, and hence θ |= (θ ⇒ ψ1) ⇒ α. To show the other point, for i ∈ ∆, we need to prove that θ ⇒ ψ1 is equivalent to a formula over O_i; this follows from the fact that |= ψ1 ⇒ Λ1(ψ1). This completes the proof of the first point of the theorem; the second point can be shown by a dual argument using Lemma 21. □
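Because V is finite and the operators are monotonic, the Tarski fixed points can be computed by bounded iteration once formulas are identified with their sets of models. A minimal sketch follows; the operator shown is an illustrative monotone placeholder over a toy state space, not the Λ1 of Theorem 25:

```python
def gfp(op, top):
    """Greatest fixed point of a monotone operator on subsets of `top`,
    computed by iterating downward from the full set (Knaster-Tarski)."""
    z = frozenset(top)
    while True:
        nz = frozenset(op(z))
        if nz == z:
            return z
        z = nz

def lfp(op):
    """Least fixed point, computed by iterating upward from the empty set."""
    z = frozenset()
    while True:
        nz = frozenset(op(z))
        if nz == z:
            return z
        z = nz

# Toy example: states 0..7; alpha holds at the even states.
states = frozenset(range(8))
alpha = frozenset(s for s in states if s % 2 == 0)

# gfp Z (alpha ∧ step(Z)): keep even states whose "+2" successor stays in Z.
step = lambda z: alpha & frozenset(s for s in states if (s + 2) % 8 in z)
print(sorted(gfp(step, states)))   # [0, 2, 4, 6]
```

Since the lattice of subsets of a finite set has height |states|, both iterations terminate after at most that many rounds.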

Common Knowledge as Weakest V-sufficient Conditions
Given a set ∆ of agents and a family V ∆ of observable variable sets of these agents, we investigate the relationship between common knowledge and the weakest V ∆ -sufficient and strongest V ∆ -necessary conditions.
Theorem 26 Let V be a finite set of variables, F a knowledge structure over V, ∆ a set of agents, and α a propositional formula over V. Then, for every state s of F, (F, s) |= C_∆α iff (F, s) |= WSC^α_∆, where WSC^α_∆ is a weakest V_∆-sufficient condition of α under the background knowledge base of F.

Proof: The direction (F, s) |= WSC^α_∆ ⇒ C_∆α follows from the fact that WSC^α_∆ is a V_∆-definable sufficient condition of α. To show the other direction, (F, s) |= C_∆α ⇒ WSC^α_∆, we consider the formula ψ1 in the proof of Theorem 25, i.e., the greatest fixed point of the operator Λ(Z) = α ∧ Λ1(Z). Because we already have (F, s) |= ψ1 ⇒ WSC^α_∆ by Theorem 25, it suffices to show (F, s) |= C_∆α ⇒ ψ1. Because the greatest fixed point ψ1 of the operator Λ can be obtained by a finite iteration of the operator with the starting point Λ(true), we only need to prove that:

1. F |= C_∆α ⇒ Λ(true); and
2. for an arbitrary propositional formula ϕ over V, if F |= C_∆α ⇒ ϕ, then F |= C_∆α ⇒ Λ(ϕ).

The first point is trivially true because Λ(true) is equivalent to α. To prove the second, suppose F |= C_∆α ⇒ ϕ. Then, for each i ∈ ∆, we have that F |= C_∆α ⇒ K_iϕ by points 5 and 7 of Lemma 10. Hence F |= C_∆α ⇒ Λ(ϕ). □

Adding Public Announcement Operator
There is a recent trend of extending epistemic logic with dynamic operators so that the evolution of knowledge can be expressed. The most basic such extension is public announcement logic (PAL), which is obtained by adding an operator for truthful public announcements. The original version of PAL was proposed by Plaza (1989). In this section, we show that the public announcement operator can be conveniently dealt with via our notion of knowledge structure.

Public Announcement Logic
Let A = {1, ..., n} be a set of agents and V a set of propositional variables. The language of public announcement logic (PAL_n) is obtained from the epistemic logic L^C_n(V) by adding a public announcement operator [ϕ] for each formula ϕ. Formula [ϕ]ψ means that "after public announcement of ϕ, formula ψ is true." We now give the semantics of public announcement logic under Kripke models. Given a Kripke structure M = (W, π, K1, ..., Kn), the semantics of the new operators is defined as follows.
M, w |= [ϕ]ψ iff M, w |= ϕ implies M|ϕ, w |= ψ, where M|ϕ = (W', π', K'1, ..., K'n) is the restriction of M to the worlds at which ϕ holds, that is:

• W' = {w' ∈ W | M, w' |= ϕ},
• π'(w')(p) = π(w')(p) for each w' ∈ W' and each p ∈ V, and
• K'_i = K_i ∩ (W' × W') for each agent i.

There are some sentences that become false immediately after their announcement. Consider, for example, the sentence "p is true but was not commonly known to be true". By the announcement of the sentence all agents learn that p, and therefore p becomes commonly known. This can be modelled in public announcement logic by the valid formula [ϕ]¬ϕ, where ϕ = p ∧ ¬C_∆p. To see its validity, let (M, w) be an arbitrary situation. If M, w |= ϕ, then M, w |= p, which implies that M|ϕ, w |= C_∆p, and therefore M|ϕ, w |= ¬ϕ.
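The update semantics is straightforward to prototype: an announcement deletes the worlds refuting ϕ and restricts the accessibility relations to the surviving worlds. A small sketch (the class, world names, and formulas-as-predicates encoding are illustrative):

```python
class Kripke:
    """A toy Kripke model: rel[i] is agent i's accessibility relation,
    given as a set of (world, world) pairs."""

    def __init__(self, worlds, rel):
        self.worlds, self.rel = set(worlds), rel

    def announce(self, holds):
        """Public announcement: keep only worlds where `holds` is true,
        and restrict every accessibility relation accordingly."""
        W = {w for w in self.worlds if holds(self, w)}
        rel = {i: {(u, v) for (u, v) in R if u in W and v in W}
               for i, R in self.rel.items()}
        return Kripke(W, rel)

    def knows(self, i, w, holds):
        """K_i: `holds` is true in every world agent i considers possible at w."""
        return all(holds(self, v) for (u, v) in self.rel[i] if u == w)

# Two worlds: p is true in "wp", false in "wn"; agent 1 cannot tell them apart.
W = {"wp", "wn"}
R = {1: {(u, v) for u in W for v in W}}
M = Kripke(W, R)
p = lambda m, w: w == "wp"

print(M.knows(1, "wp", p))              # False: "wn" is still possible
print(M.announce(p).knows(1, "wp", p))  # True: after announcing p
```

The two printed lines mirror the [ϕ]K_i p pattern: before the announcement agent 1 does not know p; after it, the refuting world is gone.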

Semantics under Knowledge Structure
The semantics of public announcement logic can be conveniently characterized by our notion of knowledge structure. We define the satisfaction relation |= between a scenario (F, s) and a formula in PAL_n. We need only consider formulas of the form [ϕ]ψ; the other cases are the same as in Definition 9.
Let V be a finite set of propositional variables and F = (V, Γ, O1, ..., On) a knowledge structure. The semantics of the new operators is defined as follows. First, let F|ϕ be the knowledge structure (V, Γ ∪ {θ}, O1, ..., On), where θ is a propositional formula over V such that (F, s) |= ϕ iff s satisfies θ. As V is a finite set, such a propositional formula θ always exists.
Then, we set that (F, s) |= [ϕ]ψ iff (F, s) |= ϕ implies (F|ϕ, s) |= ψ. We remark that if formula ϕ is equivalent to a propositional one in knowledge structure F, i.e., F |= ϕ ⇔ ϕ' for some propositional formula ϕ', then we can simply define F|ϕ by taking θ = ϕ'. The following proposition indicates that the semantics of public announcement logic under knowledge structures coincides with that under Kripke models.

Proposition 28 (1) Let V be a finite set of propositional variables and F a knowledge structure over V. For every state s of F and every formula α ∈ PAL_n, we have that (F, s) |= α iff (M(F), s) |= α.

(2) For a finite S5_n Kripke structure M and a possible world w in M, there is a knowledge structure F_M and a state s_w of F_M such that, for every formula α ∈ PAL_n, we have that (F_M, s_w) |= α iff (M, w) |= α.
Proof: (1) We proceed by induction on the structure of formula α. We consider only the case where α is of the form [ϕ]ψ; the other cases are straightforward from the definitions.
By the definition, we have that (F, s) |= [ϕ]ψ iff (F, s) |= ϕ implies that (F|ϕ, s) |= ψ. Thus, by the inductive assumption, it suffices to show that the Kripke structures M(F|ϕ) and M(F)|ϕ coincide. First, the set of possible states of M(F|ϕ) equals the set of those states s' of F with (F, s') |= ϕ. By the inductive assumption, (F, s') |= ϕ iff (M(F), s') |= ϕ. Thus, the set of possible states of M(F|ϕ) equals the set of those states s' of F with (M(F), s') |= ϕ, and hence equals the set of possible states of M(F)|ϕ. Second, on this common set of states the two structures interpret the propositional variables identically and induce, for each agent i, the same accessibility relation. Therefore M(F|ϕ) = M(F)|ϕ, and the claim follows.

(2) Let M = (W0, π0, R1, ..., Rn) be a finite S5_n Kripke structure, where W0 is a finite set and R1, ..., Rn are equivalence relations. We assume also that the set of propositional variables is V0.
Let O1, ..., On be sets of propositional variables such that:

1. V0, O1, ..., On are pairwise disjoint; and
2. for each i (0 < i ≤ n), the number of all subsets of O_i is not less than that of all equivalence classes of R_i.
By the latter condition, there is, for each i, a function g_i: W0 → 2^{O_i} such that, for all w1, w2 ∈ W0, g_i(w1) and g_i(w2) are the same subset of O_i iff w1 and w2 are in the same equivalence class of R_i. Let V = V0 ∪ O1 ∪ ... ∪ On, and define g(w) = {v ∈ V0 | π0(w)(v) = true} ∪ g_1(w) ∪ ... ∪ g_n(w). The following two claims hold:

C1 For all w1, w2 ∈ W0 and all i (0 < i ≤ n), (w1, w2) ∈ R_i iff g(w1) ∩ O_i = g(w2) ∩ O_i.
C2 For all w ∈ W0 and v ∈ V0, we have that v ∈ g(w) iff π0(w)(v) = true.
For any W ⊆ W0, let Γ_W = {α | α is a propositional formula over V and g(w) |= α for all w ∈ W}.
We then get a knowledge structure F_W = (V, Γ_W, O1, ..., On). We now show the following claim:

C3 For every s ⊆ V, s is a state of F_W iff s = g(w) for some w ∈ W.
The "if" part of claim C3 is easy to prove. If s = g(w') for some w' ∈ W, then by the definition of Γ_W we have that g(w') |= Γ_W, and hence g(w') is a state of F_W. To show the "only if" part, assume that s ≠ g(w) for every w ∈ W. Then, for every w ∈ W, there exists a propositional formula α_w over V such that s |= α_w but g(w) |= ¬α_w. Therefore, s |= ⋀_{w∈W} α_w. Moreover, we have that, for every w' ∈ W, g(w') |= ⋁_{w∈W} ¬α_w, and hence ⋁_{w∈W} ¬α_w ∈ Γ_W. Consequently, s does not satisfy Γ_W and hence is not a state of F_W.
To complete the proof of the second part, it suffices to show that, for every α ∈ PAL_n, every W ⊆ W0, and every w ∈ W, we have (M_W, w) |= α iff (F_W, g(w)) |= α, where M_W = (W, π, R1 ∩ (W × W), ..., Rn ∩ (W × W)) and π(w)(p) = π0(w)(p) for each w ∈ W and each p ∈ V0. With claims C1, C2 and C3, we can do so by induction on α. Again, we consider only the case where α is of the form [ϕ]ψ; the other cases can be dealt with in the same way as in the proof of Proposition 12.
We first show that the knowledge structure F_W|ϕ is equivalent to F_{W'}, where W' is the set of those worlds in W at which ϕ holds. As the two knowledge structures have the same set V of propositional variables and, for each agent i, the same set O_i of variables observable to agent i, we need only prove that they have the same set of states.
Therefore, by the semantics of the announcement operators in Kripke structures and in knowledge structures, we have that a formula of the form [ϕ]ψ holds at w iff it holds at g(w) in F_W, which completes the induction. □

The above proposition is a generalization of Propositions 11 and 12 to PAL_n: it shows that the satisfiability of a formula in the language of multi-agent S5 with the announcement operators is the same whether satisfiability is taken with respect to a standard Kripke structure or with respect to a knowledge structure.
Notice that, for every formula in PAL_n, we can get an equivalent propositional formula. More specifically, we have the following:

Remark 29 Let V be a finite set of propositional variables and F a knowledge structure over V with background knowledge base θ. Given a formula α ∈ PAL_n, a propositional formula α^θ can be defined by induction on the structure of α such that, for every α ∈ PAL_n, we have that F |= α ⇔ α^θ.

Complexity Results
We are interested in the following problem: given a knowledge structure F and a formula α in the language of epistemic logic, decide whether formula α is realized in structure F. This kind of problem is called the realization problem. In this section, we examine the inherent difficulty of the realization problem in terms of computational complexity. In the general case, this problem is PSPACE-complete; however, for some interesting sublanguages, it falls into co-NP.
Let L be some epistemic logic (or language).The realization problem for L is, given a knowledge structure F and a formula α ∈ L, to determine whether F |= α holds.
The realization problem here is closely related to the model checking problem: given an epistemic formula α and a Kripke structure M, determine whether M |= α. By inspecting the Kripke semantics of epistemic logic, we can see that the model checking problem can be solved in polynomial time with respect to the input size |M| + |α|. We can determine whether a formula α is realized in a knowledge structure F by first translating F into a Kripke structure M and then checking M |= α. However, the resulting algorithm requires exponential space, because the size of the corresponding Kripke structure M is exponential in the size of the knowledge structure F.
A number of algorithms for model checking epistemic specifications, and the computational complexity of the related realization problems, were studied by van der Meyden (1998). However, like Kripke structures, the semantic framework adopted there lists all global states explicitly. As a result, the size of the input of the decision problem concerned can be very large.

Proposition 30
The realization problem for L n is PSPACE-complete.
Proof: The proposition has two parts: PSPACE-easiness and PSPACE-hardness. The PSPACE-easiness part means that there is an algorithm that decides in polynomial space whether an epistemic formula α ∈ L_n is realized in a knowledge structure F. The PSPACE-hardness part means that some PSPACE-hard problem, say the satisfiability problem for quantified Boolean formulas (QBF) (Stockmeyer & Meyer, 1973), can be effectively reduced to the realization problem we consider.
It is not difficult to see the PSPACE-easiness. Given a knowledge structure F and an epistemic formula α, by Corollary 14 we can replace the knowledge modalities in α by propositional quantifiers. So, the problem of whether α is realized in F reduces to deciding whether a quantified Boolean formula is valid, which can be done in polynomial space (Stockmeyer & Meyer, 1973).
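For tiny structures, this Corollary-14-style reduction can be prototyped by brute force: K_i becomes a universal check over all states of F that agree with the current state on agent i's observable variables. A sketch with toy choices of V, Γ, and α (these names are illustrative, not from the paper):

```python
from itertools import combinations

V = ("a", "b")                          # toy variable set
O1 = frozenset({"a"})                   # agent 1 observes only a
gamma = lambda s: True                  # empty background knowledge base
alpha = lambda s: "a" in s              # objective formula α = a

states = [frozenset(c) for k in range(len(V) + 1)
          for c in combinations(V, k)]
models = [s for s in states if gamma(s)]     # the states of F

def knows(obs, s, phi):
    """(F, s) |= K_i φ: φ holds in every state of F agreeing with s on the
    observable variables `obs` — a universal quantifier over the rest of V."""
    return all(phi(t) for t in models if t & obs == s & obs)

def realized(psi):
    """F |= ψ: ψ holds at every state of F."""
    return all(psi(s) for s in models)

print(knows(O1, frozenset({"a"}), alpha))                 # agent 1 knows a
print(realized(lambda s: (not alpha(s)) or knows(O1, s, alpha)))
```

The second check is the realization test for the epistemic formula a ⇒ K_1 a; the brute-force enumeration makes explicit why the general problem lands in PSPACE rather than polynomial time.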
As for the PSPACE-hardness, it suffices to show that, for every QBF formula, we can construct a knowledge structure F such that the QBF formula is valid iff a corresponding epistemic formula is realized in F. In our construction we have only two agents, agents 1 and 2. We assign every state an integer, called the depth of the state for convenience. For every j, d_j expresses that the depth of the state is at least j. Propositions d_1, ..., d_m are observable to agent 1 but not to agent 2; agent 2, in turn, can observe the primed counterparts d'_1, ..., d'_m. The formula in item 2d says that, if c does not hold, the depth expressed by d'_1, ..., d'_m is less than that expressed by d_1, ..., d_m, and the difference is 1. The formula in item 2b implies that, under the condition that the depth of the state is exactly j, only p_j is unobservable to agent 1 and only q_j is unobservable to agent 2.
In order to show this, it suffices to prove a corresponding equivalence for every j ≤ m and every propositional formula ϕ over the relevant variables. As for the other direction, we rely on two further observations: one for each l < m − 1, and one for each l with 1 < l ≤ m. By applying the above three claims repeatedly, we can obtain the desired equivalence. However, as the QBF formula ∀p1∃q2∀p2∃q3 ⋯ ∀p_{m−1}∃q_m ϕ does not contain any free variable, we immediately conclude that the QBF formula is valid from the fact that it is satisfied in F. □

The idea behind the above translation is that we first translate formula ϕ into a quantified propositional formula in which all the quantifiers are universal, and then eliminate those universal quantifiers by introducing new variables. Let V_ϕ be the set of new variables in ϕ^F. To show the correctness of the translation, it suffices to show that F |= ϕ ⇔ ∀V_ϕ ϕ^F.
We prove this claim by induction on ϕ.
• If ϕ is a propositional formula, the claim is trivial.
• If ϕ is of the form ϕ 1 ∧ ϕ 2 , the claim can be obtained immediately by the induction assumption.
• The other propositional cases likewise hold by the induction assumption.
• Finally, if ϕ is of the form K_iψ, the claim follows by the induction assumption applied to ψ, together with the treatment of the knowledge modality given above.

Proposition 31 implies that, for an arbitrary formula ϕ in L^{+K}_n and a knowledge structure F with background knowledge base θ, checking whether ϕ is realized in F reduces to checking the validity of a propositional formula. Thus, we can solve the realization problem for formulas in L^{+K}_n by using a propositional satisfiability solver.

A Case Study: the Muddy Children Puzzle
In this section, we demonstrate how our framework can be applied to practical problems by using the example of the muddy children puzzle.

Muddy Children Puzzle
The muddy children puzzle is a well-known variant of the wise men puzzle. The story goes as follows (Fagin et al., 1995): Imagine n children playing together. Some of the children, say k of them, get mud on their foreheads. Each can see the mud on the others but not on his or her own forehead. Along comes the father, who says, "At least one of you has mud on your forehead." The father then asks the following question, over and over: "Does any of you know whether you have mud on your own forehead?" Assuming that all children are perceptive, intelligent, and truthful, and that they answer simultaneously, what we want to show is that the first k − 1 times the father asks the question they will all say "No," but the kth time the children with muddy foreheads will all answer "Yes."

Modeling the Muddy Children Puzzle
To model the muddy children puzzle, let m_i be a propositional variable meaning that child i is muddy (i < n). Denote by V the set {m_i | i < n}. Suppose the assignment s0 = {m_i | i < k} represents the actual state: children 0, ..., k − 1 have mud on their foreheads, and the other children do not. This can be captured by the scenario (F0, s0), where F0 is the knowledge structure with variable set V, empty background knowledge base, and observable sets O_i = V − {m_i} for each i < n.

Let ϕ = ⋀_{i<n} ¬K_i m_i, which indicates that no child knows whether he has mud on his own forehead. For convenience, we introduce, for every natural number l, the notation [ϕ]^l ψ, where [ϕ]^0 ψ = ψ and [ϕ]^{l+1} ψ = [ϕ][ϕ]^l ψ. The properties we want to show are then formally expressed in PAL_n as:

• [⋁_{i<n} m_i][ϕ]^j ϕ for every 0 ≤ j < k − 1, and
• [⋁_{i<n} m_i][ϕ]^{k−1} ⋀_{i<k} K_i m_i.

Formula [⋁_{i<n} m_i][ϕ]^j ϕ means that the children will all say "No" the (j + 1)th time the father asks the question. In particular, when j = 0, the condition 0 ≤ j < k − 1 simplifies to k > 1, and the resulting formula [⋁_{i<n} m_i]ϕ says that after the father announces ⋁_{i<n} m_i, every child says "No". Formula [⋁_{i<n} m_i][ϕ]^{k−1} ⋀_{i<k} K_i m_i indicates that the kth time the children with muddy foreheads will all answer "Yes." Therefore, what we want to prove is that both formulas hold in the scenario (F0, s0).

To check the above, we basically follow the definition of PAL semantics under knowledge structures. During the checking process, a series F_j (0 < j ≤ k) of knowledge structures is constructed so that F1 = F0|⋁_{i<n} m_i and, for every j (0 < j < k), F_{j+1} = F_j|ϕ. Specifically, for each step j ≤ k we get F_j = (V, Γ_j, O_0, ..., O_{n−1}), where O_i = V − {m_i} for each i < n, and Γ_j is the background knowledge base accumulated from the announcements made so far. Therefore, it suffices to verify, for 0 < j < k and i < n, that (F_j, s0) |= ¬K_i m_i, and, for i < k, that (F_k, s0) |= K_i m_i.

Figure 1: Performances of the two algorithms for the muddy children puzzle
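The announcement iteration just described is easy to prototype by explicit state enumeration, which is feasible only for small n (unlike the BDD-based implementation reported below). In this sketch, a state is the set of muddy children; the father's announcement and each round of "No" answers shrink the state set, playing the role of the Γ_j:

```python
from itertools import combinations

n, k = 4, 2                        # 4 children; children 0 and 1 are muddy
V = list(range(n))                 # variable m_i is represented by the index i
s0 = frozenset(range(k))           # the actual state

# Initially all assignments are possible states.
states = {frozenset(c) for r in range(n + 1) for c in combinations(V, r)}

def knows_own(states, s, i):
    """(F_j, s) |= K_i m_i or K_i ¬m_i: every state agreeing with s on
    O_i = V − {m_i} gives m_i the same truth value."""
    return all((i in t) == (i in s)
               for t in states if t - {i} == s - {i})

# The father's announcement: at least one child is muddy.
states = {s for s in states if s}

rounds = 0
while not knows_own(states, s0, 0):        # muddy child 0 still answers "No"
    # Announce that nobody knows: keep states where no child would know.
    states = {s for s in states if not any(knows_own(states, s, i)
                                           for i in range(n))}
    rounds += 1

print(rounds)   # 1, i.e., k − 1 rounds of "No" before the muddy children say "Yes"
```

After the loop, the muddy children (and only they) know their own status, matching the property (F_k, s0) |= K_i m_i for i < k.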

Experimental Results
Our framework of knowledge structures has been implemented using the BDD library CUDD, developed by Fabio Somenzi at the University of Colorado. Notice that BDD-based QBF solvers are not among the best solvers for satisfiability problems nowadays. However, in the experiments here we need to compute and represent a series of Boolean functions (namely the Γ_j); this is not a decision problem and cannot be handled by a general QBF solver.
To check the agents' knowledge, we implemented two different algorithms, based on Parts 1 and 2 of Corollary 19 in Section 3, respectively. Algorithm 1, which is based on Part 1 of Corollary 19, turns out to be much more efficient than Algorithm 2, which is based on Part 2, for this particular example. The reason is as follows. The main task of both algorithms is to check whether (F_j, s0) |= K_i m_i. Algorithm 1 computes s0 |= ∀m_i(Γ_j ⇒ m_i), while Algorithm 2 computes |= ∃m_i(Γ_j ∧ s0) ⇒ m_i. The main reason why Algorithm 1 is much more efficient for this particular problem is that ∀m_i(Γ_j ⇒ m_i) is simply equivalent to ¬Γ_j(m_i/false), i.e., the negation of Γ_j with false substituted for m_i. Assuming that half of the children are muddy, Figure 1 gives the performance results on a Pentium IV PC at 2.4GHz with 512MB of RAM. In the figure, the x-axis gives the number of children, and the y-axis the CPU time in seconds.
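The identity behind Algorithm 1's speedup, ∀x(Γ ⇒ x) ≡ ¬Γ(x/false), can be confirmed exhaustively for small arities. The following sketch checks it for every Boolean function Γ over three variables (the encoding of Γ as a truth table is illustrative):

```python
from itertools import product

def check(gamma):
    """Verify ∀x (Γ ⇒ x) ≡ ¬Γ[x := false] for one Boolean function Γ(x, y, z)."""
    for y, z in product([False, True], repeat=2):
        forall = all((not gamma(x, y, z)) or x for x in (False, True))
        subst = not gamma(False, y, z)
        if forall != subst:
            return False
    return True

# Try every Boolean function of three variables: 2^8 truth tables.
tables = product([False, True], repeat=8)
ok = all(check(lambda x, y, z, t=t: t[(x << 2) | (y << 1) | z]) for t in tables)
print(ok)   # True
```

The equivalence holds because the conjunct for x = true is vacuous (Γ ⇒ true), leaving only the x = false conjunct, which is exactly ¬Γ(x/false); on BDDs this turns a quantification into a single cofactor plus a negation.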
The muddy children puzzle, as a famous benchmark problem for reasoning about knowledge, can be solved by both proof-theoretic and semantical approaches (Baltag et al., 1998; Gerbrandy, 1999; Lomuscio, 1999). Proof-theoretic approaches depend on efficient provers for multi-modal logics, while semantical ones may suffer from the state-explosion problem. Our approach is essentially a semantical one, but we give a syntactical and compact way to represent Kripke structures by using knowledge structures, and hence may avoid the state-explosion problem to some extent.

Application to Verification of Security Protocols
In this section, we apply our knowledge model to security protocol verification. Security protocols, which establish the credentials of the parties and deal with the distribution of cryptographic keys, are essential for communication over vulnerable networks. Authentication plays a key role in security protocols. Subtle bugs that lead to attacks are often found only after a protocol has been in use for many years. This raises the challenge of proving the correctness of a security protocol. Formal methods have been introduced to establish and prove whether a security protocol satisfies a given authentication specification.

Background on Authentication Protocols
Authentication protocols aim to coordinate the activity of different parties (usually referred to as principals) over a network. They generally consist of a sequence of message exchanges whose format is fixed in advance and must be conformed to. Usually, a principal can take part in a protocol run in different ways, as the initiator or the responder; we say that the principal plays different roles. Very often a principal takes part in several protocol runs simultaneously, playing different roles.
The designers of authentication protocols must keep in mind that messages may be intercepted and that someone with malicious intentions may impersonate an honest principal. One of the key issues in authentication is to ensure confidentiality, that is, to prevent private information from being disclosed to unauthorized entities. Another issue is to prevent an intruder from impersonating other principals. In general, a principal should be able to ensure that the message he receives was created recently and was sent by the principal who claims to have sent it.
Cryptography is a fundamental element of authentication. A message transmitted over a channel without any cryptographic transformation is called plaintext. The intention of cryptography is to transform a given message into a form that is unrecognizable by anyone except the intended receiver. This procedure is called encryption, and the corresponding parameter is known as the encryption key; the encoded message is referred to as ciphertext. The reverse procedure is called decryption and uses the corresponding decryption key. Symmetric-key cryptography, also called secret-key cryptography, uses the same key for both encryption and decryption. Asymmetric-key cryptography, also called public-key cryptography, uses different keys for encryption and decryption: the encryption key is the public key, which is generally available to anyone, while the corresponding private key, used for decryption, is owned by a single principal.

The Dolev-Yao Intruder Model
The standard adversary model for the analysis of security protocols was introduced by Dolev and Yao in 1983 and is commonly known as the Dolev-Yao model (Dolev & Yao, 1983). According to this model, the following conservative assumptions are made:

1. Messages are considered indivisible abstract values instead of sequences of bits.
2. All the messages from one principal to any other principals must pass through the adversary and the adversary acts as a general router in the communication.
3. The adversary can read, alter and redirect any message.
4. The adversary can only decrypt a message if he has the right keys, and can only compose new messages from keys and messages that he already possesses.
5. The adversary cannot perform any statistical or other cryptanalytic attacks.
Although this model has the drawback of missing implementation-dependent attacks, it greatly simplifies protocol analysis. It has been shown to be the most powerful model of the adversary (Cervesato, 2001), because it can simulate any other possible attacker.
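Assumptions 4 and 5 induce a standard message-derivation closure for the adversary's knowledge. The sketch below (the term encoding and names are illustrative) implements only the decomposition half of the rules — splitting pairs and decrypting with known keys; composing new pairs and ciphertexts is omitted for brevity:

```python
# Message terms: atoms are strings; ("pair", a, b); ("enc", body, key).

def closure(knowledge):
    """Close a set of message terms under Dolev-Yao decomposition:
    split known pairs, and decrypt a ciphertext only if its key is known."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(known):
            if isinstance(m, tuple):
                if m[0] == "pair":                       # split pairs
                    new = {m[1], m[2]}
                elif m[0] == "enc" and m[2] in known:    # decrypt with known key
                    new = {m[1]}
                else:
                    new = set()
                if not new <= known:
                    known |= new
                    changed = True
    return known

# The adversary intercepts {Na, A}_Kb and, hypothetically, also holds kb.
k = closure({("enc", ("pair", "Na", "A"), "kb"), "kb"})
print("Na" in k)   # True: the key kb opens the ciphertext
print("ka" in k)   # False: nothing in the closure yields ka
```

This closure is what justifies treating the adversary as "the network": whatever can be derived from the intercepted traffic and the known keys, and nothing more, is in its knowledge.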

The Revised Needham-Schroeder Protocol
As Lowe (1996) pointed out, the Needham-Schroeder protocol lacks the identity of the responder in its second message and can be fixed by a small modification. However, it is not immediately clear that the revised version is correct. Our approach provides a method to automatically prove the correctness of security protocols, instead of just finding bugs as the usual analysis tools for security protocols do.
In the cryptography literature, the revised Needham-Schroeder protocol is described as follows:

1. A → B: {Na, A}_Kb
2. B → A: {B, Na, Nb}_Ka
3. A → B: {Nb}_Kb

where A → B: M is a notation for "A sends B the message M" or "B receives the message M from A". The notation {M}_K means the encryption of M with the key K. Also, A and B denote principal identifiers, and Ka and Kb indicate A's and B's public keys, respectively. Moreover, Na and Nb are nonces, i.e., newly generated unguessable values created by A and B, respectively, to guarantee the freshness of messages.
Two informal goals or specifications of the protocol are "A knows that B knows that A said Na and that Na is fresh" and "B knows that A knows that B said Nb and that Nb is fresh." To analyze the protocol, we introduce A's and B's local histories for the protocol. If A plays the role of the initiator in the protocol, assuming that B is the responder, then A's local history is as follows, where "A said M" means that A sent the message M or some other message containing M; "A sees M" indicates that A received M or obtained M from some received message; B_A is the responder of the protocol from A's local view; and Kb_A and Nb_A are, from A's local view, the responder's public key and nonce, respectively.
If B plays the role of the responder in the protocol, assuming that A is the initiator, then B's local history is as follows, where A_B is the initiator of the protocol from B's local observations, and Ka_B and Na_B are, from B's local view, the initiator's public key and nonce, respectively.
The main point of our analysis is that, if an agent is involved in the protocol, then the agent's real observations should be compatible with its local history. For example, if A is the initiator of the protocol and A sees {B, Na_B, Nb}_Ka, then, according to A's local history for the protocol, A assumes that B is the responder of the protocol, that the responder's nonce is Nb, and that, from the responder's view, the initiator's nonce is Na (see the fourth formula of the background knowledge base Γ_RNS below).
Let us see how our framework of reasoning about knowledge can be applied to this protocol.
The variable set V_RNS consists of the following atoms:

• fresh(Na): Nonce Na is fresh.
• fresh(Nb): Nonce Nb is fresh.
• role(Init, A): A plays the role of the initiator of the protocol.
• role(Resp, B): B plays the role of the responder of the protocol.
• Resp_A = B: A assumes that the responder of the protocol is B.
• Init_B = A: B assumes that the initiator of the protocol is A.
• Na_B = Na: B assumes that the partner's nonce in the execution of the protocol is Na.
• Nb_A = Nb: A assumes that the partner's nonce in the execution of the protocol is Nb.
• said(B, Na): B said Na, by sending a message containing Na.
• said(A, Nb): A said Nb.
• sees(B, {Na, A}_Kb): B sees {Na, A}_Kb (possibly by decrypting received messages).
• sees(A, {B, Na_B, Nb}_Ka): A sees {B, Na_B, Nb}_Ka.
The background knowledge base Γ_RNS consists of six formulas, among them one whose antecedent begins with role(Resp, B) ∧ Init_B = A.
Notice that the first two formulas are required for the rationality of the agents A and B.
The other formulas in Γ_RNS can be obtained automatically by a fixed set of meta-rules. We obtain the third and fourth formulas by comparing the agents' local histories for the protocol with the conditions appearing in the formulas. To get the fifth formula, informally, consider A's local history under the conditions role(Init, A) and Resp_A = B.

Let Spec_A and Spec_B be the formal specifications corresponding to the two informal goals above, for A and B respectively. It is easy to show that, for all states s of F, (F, s) |= Spec_A ∧ Spec_B, as desired.
We should mention that, in the original Needham-Schroeder protocol (Needham & Schroeder, 1978), the second message is B → A: {Na, Nb}_Ka instead of B → A: {B, Na, Nb}_Ka. The fourth formula in Γ_RNS would be changed accordingly, and this is why the specifications Spec_A and Spec_B do not hold for the original Needham-Schroeder protocol.

Discussion
BAN logic (Burrows, Abadi, & Needham, 1990) is one of the most successful logical tools for reasoning about security protocols. However, the semantics of BAN has always been arguable, and it is not clear under what assumptions the rules of BAN logic are sound and complete. This motivated research into more adequate frameworks (models). Providing a model-theoretic semantics for BAN logic has been a central idea in the development of BAN-like logics such as AT (Abadi & Tuttle, 1991) and SVO (Syverson & van Oorschot, 1996). The advantage of our approach is that we use knowledge structures as semantic models to verify the correctness of epistemic goals for security protocols.
An important problem is, given a security protocol, where the corresponding knowledge structure comes from and how it is obtained. To get the knowledge structure corresponding to a security protocol, we have developed a semantic model; the background knowledge base of the corresponding knowledge structure consists of those formulas valid in the semantic model. Moreover, the background knowledge can be generated systematically. Ongoing work is to implement our approach in a practical automatic security protocol verifier.

Related Work
There are a number of approaches dealing with the concept of variable forgetting, or the elimination of middle terms (Boole, 1854), in several contexts. The notion of variable forgetting was formally defined in propositional and first-order logic by Lin and Reiter (1994). In recent years, theories of forgetting under the answer set programming semantics have been proposed (Zhang & Foo, 2006; Eiter & Wang, 2008). Forgetting has also been generalized to description logics (Kontchakov, Wolter, & Zakharyaschev, 2008; Wang, Wang, Topor, & Pan, 2008; Kontchakov, Walther, & Wolter, 2009).
In the context of epistemic logic, the notion of forgetting has been studied in a number of ways. Baral and Zhang (2006) treated knowledge forgetting as a special form of update with the effect ¬Kϕ ∧ ¬K¬ϕ: after forgetting ϕ, the agent knows neither ϕ nor ¬ϕ. Van Ditmarsch, Herzig, Lang, and Marquis (2008) proposed a dynamic epistemic logic with an epistemic operator K and a dynamic modal operator [Fg(p)], so that the formula [Fg(p)]ϕ means that ϕ is true after the agent forgets his knowledge about p. Zhang and Zhou (2008) modeled forgetting via bisimulation invariance except for the forgotten variable. This notion of variable forgetting is closely related to quantified modal logics, where existential variable quantification can be modeled via bisimulation invariance except for the quantified variable (Engelhardt et al., 2003).
The notion of variable forgetting has various applications in knowledge representation and reasoning. For example, Weber (1986) applied it to updating propositional knowledge bases. Lang and Marquis (2002) used it for merging a set of knowledge bases whose simple union may be inconsistent. The notion of variable forgetting is also closely related to that of formula-variable independence, because the result of forgetting the set of variables V in a formula ϕ can be defined as the strongest consequence of ϕ that is independent of V (Lang et al., 2003). More recently, Liu and Lakemeyer (2009) applied the notion of forgetting to the situation calculus and obtained some interesting results on the first-order definability and computability of progression for local-effect actions.
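The connection between forgetting and formula-variable independence can be made concrete with a small brute-force sketch (our own illustration, reusing the predicate representation above): ϕ is independent of p exactly when forgetting p leaves ϕ unchanged up to logical equivalence.

```python
from itertools import product

# Formulas as predicates over assignments; forgetting per Lin & Reiter (1994).
def forget(phi, p):
    return lambda a: phi({**a, p: True}) or phi({**a, p: False})

def equivalent(f, g, variables):
    return all(
        f(dict(zip(variables, v))) == g(dict(zip(variables, v)))
        for v in product([False, True], repeat=len(variables))
    )

# phi is independent of p iff forget(phi, p) is equivalent to phi:
# forgetting computes the strongest consequence of phi not mentioning p.
def independent(phi, p, variables):
    return equivalent(forget(phi, p), phi, variables)

# (p OR (q AND NOT q)) does not really depend on q:
phi = lambda a: a["p"] or (a["q"] and not a["q"])
assert independent(phi, "q", ["p", "q"])
# (p AND q) genuinely depends on q:
assert not independent(lambda a: a["p"] and a["q"], "q", ["p", "q"])
```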

Conclusion
The main contributions of this paper are as follows. First, we have investigated knowledge reasoning within a simple framework called knowledge structure, which consists of a global knowledge base and a set of observable variables for each agent. The notion of knowledge structure can be used as a semantic model for a multi-agent logic of knowledge and common knowledge. In this model, the computation of knowledge and common knowledge can be reduced to the operation of variable forgetting; moreover, an objective formula α is known by agent i at state s iff a weakest sufficient condition of α on O_i holds at state s.
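This reduction can be sketched in Python for the propositional case (an illustrative toy, not the paper's implementation; the names states, knows, and wsc are ours). Knowledge is checked directly by quantifying over states agreeing on the observables, and the weakest sufficient condition of α on O_i under Γ is computed as the negation of the result of forgetting all non-observable variables in Γ ∧ ¬α; the two computations agree at every state.

```python
from itertools import product

# A state is an assignment (dict) satisfying the background KB `gamma`.
def states(gamma, variables):
    return [dict(zip(variables, v))
            for v in product([False, True], repeat=len(variables))
            if gamma(dict(zip(variables, v)))]

# Agent i knows objective alpha at s iff alpha holds at every state
# agreeing with s on i's observable variables `obs`.
def knows(gamma, variables, obs, alpha, s):
    return all(alpha(t) for t in states(gamma, variables)
               if all(t[x] == s[x] for x in obs))

# Weakest sufficient condition of alpha on `obs` under gamma:
# NOT forget(gamma AND NOT alpha; all variables outside obs).
def wsc(gamma, variables, obs, alpha):
    phi = lambda a: gamma(a) and not alpha(a)
    for x in variables:
        if x not in obs:  # forget each non-observable variable
            phi = (lambda f, y: lambda a: f({**a, y: True}) or
                   f({**a, y: False}))(phi, x)
    return lambda a: not phi(a)

# Toy check: gamma = (ack -> msg), O = {ack}; knowing msg matches wsc(msg).
V = ["msg", "ack"]
gamma = lambda a: (not a["ack"]) or a["msg"]
alpha = lambda a: a["msg"]
for s in states(gamma, V):
    assert knows(gamma, V, ["ack"], alpha, s) == wsc(gamma, V, ["ack"], alpha)(s)
```

In the toy example the computed weakest sufficient condition simplifies to ack: the agent observing only ack knows msg exactly when ack is true.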
Second, to capture the notion of common knowledge in our framework, we have generalized the notion of weakest sufficient condition and obtained, for a set V of sets of propositional variables, the notion of the weakest V-sufficient condition. Given a set ∆ of agents and the family V_∆ of observable sets of these agents, we have shown that an objective formula α is common knowledge for the agents in ∆ iff the weakest {O_i | i ∈ ∆}-sufficient condition of α holds. Also, we have shown that the public announcement operator can be conveniently dealt with via our notion of knowledge structure.
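Semantically, common knowledge of an objective α among the agents in ∆ holds at s iff α holds at every state reachable from s by chaining single-agent indistinguishability steps (agreement on some O_i with i ∈ ∆), i.e., via the transitive closure E*_{V_∆}. A brute-force sketch of this fixpoint computation (our own illustration; the function names are assumptions):

```python
from itertools import product

def states(gamma, variables):
    return [dict(zip(variables, v))
            for v in product([False, True], repeat=len(variables))
            if gamma(dict(zip(variables, v)))]

# Common knowledge of alpha among agents with observable sets `obs_family`:
# alpha must hold at every state reachable from s through the closure of the
# union of the agents' indistinguishability relations.
def common_knowledge(gamma, variables, obs_family, alpha, s):
    all_states = states(gamma, variables)
    frozen = lambda a: tuple(sorted(a.items()))
    reachable, frontier = {frozen(s)}, [s]
    while frontier:  # compute the closure under E_{V_Delta}
        t = frontier.pop()
        for u in all_states:
            if frozen(u) not in reachable and any(
                    all(u[x] == t[x] for x in obs) for obs in obs_family):
                reachable.add(frozen(u))
                frontier.append(u)
    return all(alpha(dict(t)) for t in reachable)

# Toy check: with background p <-> q, p is common knowledge at (p, q) = (T, T),
# because every reachable state makes p true.
V = ["p", "q"]
gamma = lambda a: a["p"] == a["q"]
alpha = lambda a: a["p"]
assert common_knowledge(gamma, V, [["p"], ["q"]], alpha, {"p": True, "q": True})
```

Without the background p ↔ q, chaining the two agents' indistinguishability steps reaches every state, so p would not be common knowledge.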
Third, the relationship between S5 Kripke structures and knowledge structures has been explored. Specifically, a formula in the language of multi-agent S5 with the public announcement operator is satisfiable w.r.t. standard Kripke structures iff it is satisfiable w.r.t. knowledge structures.
Fourth, we have examined the computational complexity of the problem of whether a formula α is realized in a knowledge structure F. In the general case, this problem is PSPACE-hard; however, there are some interesting subcases in which it can be reduced to co-NP.
Finally, we have demonstrated the practical strength of the concept of knowledge structure with some empirical results on the satisfiability problem for knowledge structures based on instances of the muddy children puzzle; even for the smallest instances considered in the experiments, generating the corresponding S5 Kripke structure would be out of reach. We have also discussed the automated analysis and verification of the corrected Needham-Schroeder protocol via knowledge structures.
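A knowledge structure for the muddy children puzzle is compact to state: each child i observes every forehead but its own, and the father's announcement contributes the background formula m_1 ∨ ... ∨ m_n. The following sketch for n = 3 (our own illustration; knows_own is an assumed helper name) checks when a child already knows its own status:

```python
from itertools import product

# Knowledge structure for the muddy children puzzle (n = 3), after the
# father announces "at least one of you is muddy":
#   Gamma = m1 or m2 or m3;  child i observes every forehead but its own.
V = ["m1", "m2", "m3"]
gamma = lambda a: a["m1"] or a["m2"] or a["m3"]
states = [dict(zip(V, v)) for v in product([False, True], repeat=3)
          if gamma(dict(zip(V, v)))]

def knows_own(i, s):
    """Child i knows whether it is muddy iff m_i takes the same value at
    every state of the structure agreeing with s on i's observables."""
    obs = [v for v in V if v != f"m{i}"]
    return len({t[f"m{i}"] for t in states
                if all(t[x] == s[x] for x in obs)}) == 1

# With exactly one muddy child, that child already knows it is muddy:
assert knows_own(1, {"m1": True, "m2": False, "m3": False})
# With two muddy children, a muddy child does not yet know (before round 1):
assert not knows_own(1, {"m1": True, "m2": True, "m3": False})
```

The subsequent rounds of the puzzle are modeled by public announcements of "I do not know", each of which shrinks the state set of the structure.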
Our work presented in this paper can be extended in several directions. First, we will investigate whether our knowledge structures can be extended and used as a basis for knowledge-based programming (Fagin et al., 1995). Second, in our current framework of knowledge structures, we have not considered the issue of only knowing, which has been extensively studied in other knowledge reasoning models (Halpern & Lakemeyer, 1996; van der Hoek, Jaspars, & Thijsse, 2003; Levesque, 1990). It will be interesting to explore how our knowledge model handles only knowing in reasoning about knowledge. Third, the notions and methods in this work can be extended to investigate the variable forgetting operator in multi-agent logics of belief. Finally, recent research has shown that knowledge update has many important applications in reasoning about actions and plans and in the dynamic modeling of multi-agent systems (Zhang, 2003). A first step in this direction (in mono-agent S5) can be found in the work of Herzig, Lang, and Marquis (2003). Baral and Zhang (2001) have proposed a general model for performing knowledge update based on the standard single-agent S5 modal logic. We believe that their work can be extended to multi-agent modal logics by using the knowledge structures defined in this paper, and therefore to develop a more general system for knowledge update. Along this direction, an interesting research issue is to explore the underlying relationship between knowledge forgetting, a specific type of knowledge update, and variable forgetting as addressed in this paper.
Thus, Resp_A = B does not necessarily hold under the condition role(Init, A) ∧ sees(A, {N_a, B, N_b}_{K_a}) ∧ said(A, N_b) ∧ fresh(N_b).
Let F be a knowledge structure, and s be a state of F. Also suppose that ∆ ⊆ {1, ..., n}, and V_∆ = {O_i | i ∈ ∆}. Then:
1. For any objective formula ψ (i.e., a propositional formula over V), (F, s) |= ψ iff s |= ψ.
2. For any i-local formula β, it suffices to show that (F, s) |= K_i β iff (F, s) |= β. By the first item of this proposition, we have that (F, s) |= β iff s |= β. Moreover, as β is i-local, i.e., over O_i, for all assignments s′ with s′ ∩ O_i = s ∩ O_i, we have that s′ |= β iff s |= β. Therefore, we get the following three "iff"s: (F, s) |= K_i β iff, for all states s′ of F with s′ ∩ O_i = s ∩ O_i, we have that (F, s′) |= β; iff, for all states s′ of F with s′ ∩ O_i = s ∩ O_i, we have that s′ |= β; iff s |= β. Thus, (F, s) |= K_i β iff (F, s) |= β.
|= (α_1 ⇒ α_2) and (F, s′) |= α_1. Therefore, we get that, for all s′ of F with s′ ∩ O_i = s ∩ O_i, we have (F, s′) |= α_2. It follows immediately that (F, s) |= K_i α_2.
6. This item can be shown in the same way as in the proof of item 5.
7. It suffices to prove that, for any states s, s′ with s ∩ O_i = s′ ∩ O_i and s′ E*_{V_∆} s″, we can get s E*_{V_∆} s″, which follows immediately from the fact that E*_{V_∆} is the transitive closure of E_{V_∆}. □

Let F be a knowledge structure with n agents, ψ a formula over V, and α a formula in L^C_n. Suppose that SNC^ψ_i is a strongest necessary condition of ψ over O_i under Γ, S_ψ denotes the set of those states s of F such that (F, s) |= ψ, and S_{SNC^ψ_i} denotes the set of those states s such that (F, s) |= SNC^ψ_i. Then, for each agent i, we have that

Therefore, all the V-definable propositions under Γ are false, true, h, and ¬h (up to logical equivalence under Γ). □

Example 23: Now we recall the background knowledge Γ_CS about the communication scenario between Alice and Bob in the introduction section. Γ_CS is the set of the following three formulas:
Bob_recv_msg ⇒ Alice_send_msg
Bob_send_ack ⇒ Bob_recv_msg
Alice_recv_ack ⇒ Bob_send_ack
Let O_A = {Alice_send_msg, Alice_recv_ack}, O_B = {Bob_recv_msg, Bob_send_ack}, and V_AB = {O_A, O_B}. Clearly, if a formula ϕ is logically implied by Γ_CS or inconsistent with Γ_CS, then ϕ is V_AB-definable under Γ_CS. Moreover, as in Example 22, we are able to check that there are no V_AB-definable formulas other than those implied by Γ_CS or inconsistent with Γ_CS.
Therefore, given a formula α, a weakest V_AB-sufficient condition of α under Γ_CS is either implied by Γ_CS (if Γ_CS |= α) or inconsistent with Γ_CS. □

which is over O_i. This completes the first point of the conclusion of the theorem. We now show the second point of this theorem by using the first point and Lemma 21.
Let SNC^α_∆ be a strongest V_∆-necessary condition of α under {θ}. By Lemma 21, ¬SNC^α_∆ is a weakest V_∆-sufficient condition of ¬α under {θ}. Thus, by the first point of this theorem, ¬SNC

|= [ϕ]ψ, we have, by the induction assumption, that (F_W, g(w)) |= ϕ iff (M|_W, w) |= ϕ. Also, by the claim we just proved above, we have that an assignment s on V is a state of F_W|ϕ iff s is a state of F_W and (F_W, s) |= ϕ. Thus, by claim C3, s is a state of F_W|ϕ iff s = g(w′) for some w′ ∈ W with (F_W, g(w′)) |= ϕ. On the other hand, we have, by claim C3 again, that an assignment s is a state of F_{W′} iff s = g(w′) for some w′ ∈ W′, i.e., w′ ∈ W and (M_W, w′) |= ϕ. However, by the induction assumption, (F_W, g(w′)) |= ϕ iff (M_W, w′) |= ϕ. Therefore, the knowledge structures F_W|ϕ and F_{W′} have the same set of states. To show (F_W, g(w)) |= [ϕ]ψ iff (M|_W, w)

to A's local history, A sees the nonce N_a generated by A itself. Because N_a is only said in the message {N_a, A}_{K_b}, B, who has the inverse key of K_b, must see this message and have said N_a. Similarly, we can see that the sixth formula holds. The last formula follows immediately from the definition of the protocol. The set O_A of the variables observable to A is {fresh(N_a), role(Init, A), Resp_A = B}.