Algorithms for Generating Ordered Solutions for Explicit AND/OR Structures

We present algorithms for generating alternative solutions for explicit acyclic AND/OR structures in non-decreasing order of cost. The proposed algorithms use a best first search technique and report the solutions using an implicit representation ordered by cost. In this paper, we present two versions of the search algorithm: (a) an initial version of the best first search algorithm, ASG, which may present a solution more than once while generating the ordered solutions, and (b) another version, LASG, which avoids the construction of duplicate solutions. The actual solutions can be reconstructed quickly from the implicit compact representation used. We have applied the methods on a few test domains, some of them synthetic and the others based on well known problems, including the search space of the 5-peg Tower of Hanoi problem, the matrix-chain multiplication problem and the problem of finding the secondary structure of RNA. Experimental results show the efficacy of the proposed algorithms over the existing approach. Our proposed algorithms have potential use in various domains, ranging from knowledge based frameworks to service composition, where the AND/OR structure is widely used for representing problems.


Introduction
The use of AND/OR structures for modeling and solving complex problems efficiently has attracted a significant amount of research effort over the last few decades. Initially, AND/OR search spaces were mostly used in problem reduction search for solving complex problems, logical reasoning, theorem proving, etc., where the overall problem can be hierarchically decomposed into conjunctions and disjunctions of subproblems (Pearl, 1984; Nilsson, 1980). Subsequently, AND/OR structures were also applied in a variety of domains, e.g., for representing assembly plans (Homem de Mello & Sanderson, 1990), generating VLSI floor-plans (Dasgupta, Sur-Kolay, & Bhattacharya, 1995), puzzle solving (Fuxi, Ming, & Yanxiang, 2003), etc. Traditionally the algorithm AO* (Pearl, 1984; Nilsson, 1980; Martelli & Montanari, 1978, 1973; Chang & Slagle, 1971) has been used for searching implicitly defined AND/OR structures. An empirical study of AO* can be found in Bonet and Geffner's (2005) work.
In the recent past there has been a renewed research interest in the application of AND/OR structures. In various planning problems, including conditional planning to handle uncertainty, the AND/OR structure (Russell & Norvig, 2003) is a natural form of representation. The problem of generating solutions for such representations has been studied extensively (Hansen & Zilberstein, 2001; Jiménez & Torras, 2000; Chakrabarti, 1994). Dechter and Mateescu (2007) have presented the explicit AND/OR search space perspective for graphical models. Different search strategies (best first, branch and bound, etc.) over the AND/OR search spaces in graphical models are discussed by Marinescu and Dechter (2007b, 2006). AND/OR search spaces are also used for solving mixed integer linear programming (Marinescu & Dechter, 2005), 0/1 integer programming (Marinescu & Dechter, 2007a), and combinatorial optimization in graphical models (Marinescu & Dechter, 2009a, 2009b). AND/OR Multi-Valued Decision Diagrams (AOMDD), which combine the idea of Multi-Valued Decision Diagrams (MDD) and AND/OR structures, are presented by Mateescu, Dechter, and Marinescu (2008), and further research along this direction can be found in subsequent work. AND/OR search spaces are also applied for solution sampling and counting (Gogate & Dechter, 2008). Smooth Deterministic Decomposable Negation Normal Forms (sd-DNNF) (Darwiche, 2001) exhibit an explicit AND/OR DAG structure and have been used for various applications including compiling knowledge (Darwiche, 1999), estimating belief states (Elliott & Williams, 2006), etc.
Apart from the domains of planning, constraint satisfaction, knowledge based reasoning, etc., AND/OR structure based techniques are also widely used in various application domains, e.g., web service composition (Gu, Xu, & Li, 2010; Shin, Jeon, & Lee, 2010; Gu, Li, & Xu, 2008; Ma, Dong, & He, 2008; Yan, Xu, & Gu, 2008; Lang & Su, 2005), vision and graphics tasks (Chen, Xu, Liu, & Zhu, 2006), etc. Lang and Su (2005) have described an AND/OR graph search algorithm for composing web services for user requirements. Ma et al. (2008) have advocated the use of AND/OR trees to capture dependencies between the inputs and outputs of the component web services and propose a top-down search algorithm to generate solutions of the AND/OR tree. Further research that uses AND/OR structures in the context of web service composition can be found in the works of Gu et al. (2010, 2008), Shin et al. (2010), and Yan et al. (2008). Chen et al. (2006) have applied explicit AND/OR structures for cloth modeling and recognition, an important problem in vision and graphics tasks.
Such recent adoption of AND/OR search spaces for a wide variety of AI problems warrants further research towards developing suitable algorithms for searching AND/OR structures from different perspectives. In the general setting, the fundamental problem remains to find the minimum cost solution of AND/OR structures. For a given explicit AND/OR graph structure, the minimum cost solution is computed using either a top-down or a bottom-up approach. These approaches are based on the principle of dynamic programming and have complexity that is linear in the size of the search space. Finding a minimum cost solution of an explicit AND/OR structure is a fundamental step for the approaches that use an implicit representation and systematically explore the search space. This is particularly the case for AO* (Nilsson, 1980), where the potential solution graph (psg) is recomputed from the current explicit graph every time a node is expanded. In view of recent research where AND/OR structures are used and leveraged in a wide variety of problems, ranging from planning domains to web service composition, the need for generating an ordered set of solutions of a given AND/OR structure becomes evident. We briefly mention some areas where ordered solutions are useful.
An ordered set of solutions of an explicit AND/OR DAG can be used to develop useful variants of the AO* algorithm. Currently in AO* only the minimum cost solution is computed, whereas several variants of the A* algorithm exist where solutions are sought within a factor of the cost of the optimal solution. These approaches (Ebendt & Drechsler, 2009; Pearl, 1984) were developed to adapt the A* algorithm for using inadmissible heuristics, leveraging multiple heuristics (Chakrabarti, Ghose, Pandey, & DeSarkar, 1989), generating solutions quickly within bounded sub-optimality, etc. Typically these techniques order the Open list using one evaluation function, and the next element for expansion is selected from an ordered subset of Open using some other criterion. Similar techniques can be developed for AO* search if an ordered set of potential solutions is made available. That set can be used for node selection and expansion instead of expanding nodes only from the current best psg. This opens up an interesting area with significant research potential, where the existing variations of the A* algorithm can be extended to AND/OR search spaces.
In the context of model based programming, the problem of finding an ordered set of solutions has significant importance. Elliott (2007) has used valued sd-DNNFs to represent the problem and proposed an approach to generate the k-best solutions. Since valued sd-DNNFs have an AND/OR structure, the proposed approach is possibly the earliest algorithm for generating an ordered set of solutions of an AND/OR structure. The problem of finding an ordered set of solutions for graphical models is studied by Flerova and Dechter (2011, 2010). However, these techniques use alternative representations for the algorithm, where AND/OR search spaces can be constructed for graphical models (Dechter & Mateescu, 2007). Recent research involving AOMDD based representations of weighted structures suggests future extensions towards generalizing Algebraic Decision Diagrams and introduces the notion of cost in AOMDDs. We envisage that an ordered set of solutions will find useful applications in the context of research around AND/OR decision diagram based representations.
In the domain of service composition, the primary motivation behind providing a set of alternative solutions ordered by cost is to offer more choices, while trading off the specified cost criterion (to a limited extent) in favor of other 'unspecified' criteria (primarily from the standpoint of quality). Shiaa, Fladmark, and Thiell (2008) have presented an approach for generating a ranked set of solutions for the service composition problem. Typically the quality criteria are subjective in nature and difficult to express in terms of a single scalar cost function which is able to combine the cost/price and the quality aspects together. These aspects of quality are often encountered in the context of serving custom user requirements where the user prefers to minimize the cost/price of the solution while preserving his/her preferences. For example, for booking a holiday package for a specific destination, a travel service portal typically offers a list of packages with various combinations of attractions, hotel options and meal plans ordered by a single cost criterion, namely, the cost of the package. In general any product/solution that is composed of a number of components has a compositional flavor similar to service composition and it becomes important to present the user a set of alternative solutions ordered by cost so that he/she can select the best alternative according to his/her preferences.
Dynamic programming formulations typically have an underlying AND/OR DAG structure, which has been formally studied in the past (Martelli & Montanari, 1973). Besides classical problems like matrix chain multiplication, many other real world optimization problems admit dynamic programming formulations where alternative solutions ordered by cost are useful in practice. One example of such a problem is finding the secondary structure of RNA (Mathews & Zuker, 2004), an important problem in Bioinformatics. RNAs may be viewed as sequences of bases belonging to the set {Adenine (A), Cytosine (C), Guanine (G), Uracil (U)}. RNA molecules tend to loop back and form base pairs with themselves, and the resulting shape is called the secondary structure. The primary factor that influences the secondary structure of RNA is the number of base pairings (a higher number of base pairings generally implies a more stable secondary structure). Under the well established rules for base pairings, the problem of maximizing the number of base pairings has an interesting dynamic programming formulation. However, apart from the number of base pairings, there are other factors that influence the stability, and these factors are typically evaluated experimentally. Therefore, for a given RNA sequence, it is useful to compute a pool of candidate secondary structures (in decreasing order of the number of base pairings) that may be subjected to further experimental evaluation in order to determine the most stable secondary structure.
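The base-pair maximization problem mentioned above has a classic dynamic programming formulation (often attributed to Nussinov). The following sketch is illustrative only, with hypothetical function and variable names; it computes just the optimal number of pairings, whereas generating a pool of near-optimal structures is exactly where ordered-solution algorithms become useful.

```python
def max_base_pairs(rna, min_loop=0):
    """Nussinov-style DP: maximum number of complementary base pairs.

    rna: string over {A, C, G, U}.  Watson-Crick pairs plus the G-U wobble
    pair are allowed; min_loop optionally forbids sharp hairpin turns.
    """
    pairs = {('A', 'U'), ('U', 'A'), ('C', 'G'),
             ('G', 'C'), ('G', 'U'), ('U', 'G')}
    n = len(rna)
    dp = [[0] * n for _ in range(n)]          # dp[i][j] = best for rna[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # case 1: base j is unpaired
            for t in range(i, j - min_loop):  # case 2: j pairs with some t
                if (rna[t], rna[j]) in pairs:
                    left = dp[i][t - 1] if t > i else 0
                    best = max(best, left + 1 + dp[t + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

The DP table itself induces an AND/OR DAG (an OR choice among the cases, an AND combination of the two sub-intervals in case 2), which is the structure on which ordered solutions would be enumerated.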
The problem of generating an ordered set of solutions is well studied in other domains. For discrete optimization problems, Lawler (1972) proposed a general procedure for generating the k-best solutions. The similar problem of finding the k most probable configurations in probabilistic expert systems is addressed by Nilsson (1998). Fromer and Globerson (2009) have addressed the problem of finding the k maximum probability assignments for probabilistic modeling using LP relaxation. In the context of ordinary graphs, Eppstein (1990) has studied the problem of finding the k smallest spanning trees. Subsequently, an algorithm for finding the k best shortest paths was proposed in Eppstein's (1998) work. Hamacher and Queyranne (1985) have suggested an algorithm for finding the k-best solutions to combinatorial optimization problems. Algorithms for generating the k-best perfect matchings are presented by Chegireddy and Hamacher (1987). Other researchers have applied the k-shortest path problem to practical scenarios, such as routing and transportation, and developed specific solutions (Takkala, Borndörfer, & Löbel, 2000; Subramanian, 1997; Topkis, 1988; Sugimoto & Katoh, 1985). However, none of these approaches appears to be directly applicable to AND/OR structures. Recently, schemes related to ordered solutions for graphical models (Flerova & Dechter, 2011) and anytime AND/OR graph search (Otten & Dechter, 2011) have been proposed. Anytime algorithms for the traditional OR search space (Hansen & Zhou, 2007) are well addressed by the research community.
In this paper, we address the problem of generating an ordered set of solutions for an explicit AND/OR DAG structure and present new algorithms. The existing method, proposed by Elliott (2007), works bottom-up by computing the k-best solutions for the current node from the k-best solutions of its children nodes. We present a best first search algorithm, named Alternative Solution Generation (ASG), for generating an ordered set of solutions. The proposed algorithm maintains a list of candidate solutions, initially containing only the optimal solution, and iteratively generates the next solution in non-decreasing order of cost by selecting the minimum cost solution from the list. In each iteration, this minimum cost solution is used to construct another set of candidate solutions, which is again added to the current list. We present two versions of the algorithm:
a. Basic ASG (referred to as ASG henceforth): this version of the algorithm may construct a particular candidate solution more than once;
b. Lazy ASG (LASG): another version of the ASG algorithm that constructs every candidate solution only once.
In these algorithms, we use a compact representation, named signature, for storing the solutions. From the signature of a solution, the actual explicit form of that solution can be constructed through a top-down traversal of the given DAG. This representation allows the proposed algorithms to work in a top-down fashion, starting from the initial optimal solution. Another salient feature of our proposed algorithms is that they work incrementally, unlike the existing approach. Our algorithms can be interrupted at any point during execution, the set of ordered solutions obtained so far can be observed, and subsequent solutions will be generated when the algorithms are resumed. Moreover, if an upper limit estimate on the number of solutions required is known a priori, our algorithms can be further optimized using that estimate.
The rest of the paper is organised as follows. The necessary formalisms and definitions are presented in Section 2. In Section 3, we address the problem of generating an ordered set of solutions for trees. Subsequently, in Section 4, we address the problem of finding alternative solutions of explicit acyclic AND/OR DAGs in non-decreasing order of cost. We present two different solution semantics for AND/OR DAGs and discuss the existing approach as well as our proposed approach, along with a comparative analysis. Detailed experimental results, including a comparison of the performance of the proposed algorithms with the existing algorithm (Elliott, 2007), are presented in Section 5. We have used randomly constructed trees and DAGs as well as some well-known problem domains, including the 5-peg Tower of Hanoi problem, the matrix-chain multiplication problem and the problem of finding the secondary structure of RNA, as test domains. The time required and the memory used for generating a specific number of ordered solutions for the different domains are reported in detail. In Section 6, we briefly outline how the proposed algorithms can be applied to implicitly specified AND/OR structures. Finally, we present concluding remarks in Section 7.

Definitions
In this section, we describe the terminology of AND/OR trees and DAGs, followed by other definitions that are used in this paper. G_αβ = (V, E) is an AND/OR directed acyclic graph, where V is the set of nodes and E is the set of edges. Here α and β in G_αβ refer to the AND nodes and the OR nodes in the DAG respectively. The direction of edges in G_αβ is from the parent node to the child node. The nodes of G_αβ with no successors are called terminal nodes. The non-terminal nodes of G_αβ are of two types: (i) OR nodes and (ii) AND nodes. V_α and V_β are the sets of AND nodes and OR nodes in G_αβ respectively, and n_αβ = |V|, n_α = |V_α|, and n_β = |V_β|. The start (or root) node of G_αβ is denoted by v_R. OR edges and AND edges are the edges that emanate from OR nodes and AND nodes respectively.

Definition 2.a [Solution Graph]
A solution graph, S(v_q), rooted at any node v_q ∈ V, is a finite sub-graph of G_αβ defined as follows:
a. v_q is in S(v_q);
b. If v′_q is an OR node in G_αβ and v′_q is in S(v_q), then exactly one of its immediate successors in G_αβ is in S(v_q);
c. If v′_q is an AND node in G_αβ and v′_q is in S(v_q), then all its immediate successors in G_αβ are in S(v_q);
d. Every maximal (directed) path in S(v_q) ends in a terminal node;
e. No node other than v_q or its successors in G_αβ is in S(v_q).
By a solution graph S of G_αβ we mean a solution graph with root v_R. ⊓⊔

Definition 2.b [Cost of a Solution Graph] In G_αβ, every edge e_qr ∈ E from node v_q to node v_r has a finite non-negative cost c_e(⟨v_q, v_r⟩) or c_e(e_qr). Similarly, every node v_q has a finite non-negative cost denoted by c_v(v_q). The cost of a solution S is defined recursively as follows. For every node v_q in S, the cost C(S, v_q) is:
C(S, v_q) = c_v(v_q), if v_q is a terminal node;
C(S, v_q) = c_v(v_q) + c_e(e_qr) + C(S, v_r), if v_q is an OR node and v_r is its immediate successor in S;
C(S, v_q) = c_v(v_q) + Σ_{j=1..k} [ c_e(e_qj) + C(S, v_j) ], if v_q is an AND node with degree k, and v_1, . . . , v_k are the immediate successors of v_q in S.
Therefore the cost of a solution S is C(S, v_R), which is also denoted by C(S). We denote the optimal solution below every node v_q as opt(v_q). Therefore, the optimal solution of the entire AND/OR DAG G_αβ, denoted by S_opt, is opt(v_R). The cost of the optimal solution rooted at every node v_q in G_αβ is C_opt(v_q), which is defined recursively (for minimum cost objective functions) as follows:
C_opt(v_q) = c_v(v_q), if v_q is a terminal node;
C_opt(v_q) = c_v(v_q) + min_r { c_e(e_qr) + C_opt(v_r) }, if v_q is an OR node, where the minimum is taken over the immediate successors v_r of v_q in G_αβ;
C_opt(v_q) = c_v(v_q) + Σ_{j=1..k} [ c_e(e_qj) + C_opt(v_j) ], where 1 ≤ j ≤ k, v_q is an AND node with degree k, and v_1, . . . , v_k are the immediate successors of v_q in G_αβ.
The cost of the optimal solution S_opt of G_αβ is denoted by C_opt(v_R) or, alternatively, by C_opt(S_opt). When the objective function needs to be maximized, the max function is used instead of the min function in the definition of C_opt(v_q).
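The recursive definition of C_opt translates directly into a single bottom-up pass over the DAG. The sketch below uses a hypothetical dictionary-based encoding of G_αβ (node type, node cost c_v, children with edge costs c_e); nodes are processed in reverse topological order so every child's optimum is available before its parents are reached.

```python
def compute_c_opt(nodes, order):
    """Bottom-up computation of C_opt for every node of an AND/OR DAG.

    nodes: name -> {'type': 'AND'|'OR'|'TERM', 'cv': node cost,
                    'children': [(child_name, edge_cost), ...]}
    order: node names in reverse topological order (children before parents).
    Returns a dict name -> C_opt(name).
    """
    c_opt = {}
    for name in order:
        n = nodes[name]
        if n['type'] == 'TERM':
            c_opt[name] = n['cv']
        elif n['type'] == 'OR':
            # keep the single cheapest child (edge cost plus child's optimum)
            c_opt[name] = n['cv'] + min(ce + c_opt[ch]
                                        for ch, ce in n['children'])
        else:  # AND node: every child contributes
            c_opt[name] = n['cv'] + sum(ce + c_opt[ch]
                                        for ch, ce in n['children'])
    return c_opt
```

This pass is linear in the size of the DAG, matching the complexity claim made in the introduction.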
⊓⊔ It may be noted that more than one solution below an OR node v_q may qualify as the optimal one, i.e., when they have the same cost and that cost is the minimum. Ties for the optimal solution below any such OR node v_q are resolved arbitrarily, and only one among the qualifying solutions (determined after tie-breaking) is marked as opt(v_q).
An AND/OR tree, T_αβ = (V, E), is an AND/OR DAG that additionally satisfies the restrictions of a tree structure, i.e., there can be at most one parent node for any node v_q in T_αβ. In the context of AND/OR trees, we use e_q to denote the edge that points to the vertex v_q. An alternating AND/OR tree, T̃_αβ = (V, E), is an AND/OR tree with the restriction that the AND nodes and the OR nodes alternate: every child of an AND node is either an OR node or a terminal node, and every child of an OR node is either an AND node or a terminal node. We use the term solution tree to denote the solutions of AND/OR trees.
We also discuss a different solution semantics, namely tree based semantics, for AND/OR DAGs. Every AND/OR DAG can be converted to an equivalent AND/OR tree by traversing the intermediate nodes in reverse topological order and replicating the subtree rooted at a node whenever the in-degree of the traversed node is more than 1. The details are shown in Procedure ConvertDAG. Suppose an AND/OR DAG G_αβ is converted to an equivalent AND/OR tree T_αβ. We define the solutions of T_αβ as the solutions of G_αβ under tree based semantics.
Procedure ConvertDAG(G_αβ)
input : An AND/OR DAG G_αβ
output: An equivalent AND/OR tree T_αβ
1 Construct a list M of the non-terminal nodes of G_αβ, sorted in the reverse topological order;
2 for each node v_q in M do
3     if the in-degree of v_q is greater than 1 then
4         for each incoming edge e_t of v_q other than the first do
5             Replicate the sub-tree rooted at v_q with v′_q as the root;
6             Modify the target node of e_t from v_q to v′_q;
7         end
8     end
9 end

In this paper we use the solution semantics defined in Definition 2.a as the default semantics for the solutions of AND/OR DAGs. When the tree based semantics is used, it is explicitly mentioned. In the figures, the edge costs are shown by the side of each edge within angled brackets. The cost of each terminal node is shown inside a box. For every non-terminal node v_q, the pair of costs, c_v(v_q) and C_opt(v_q), is shown inside a rectangle.
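The effect of Procedure ConvertDAG can equivalently be obtained by recursively unfolding the DAG from the root: each time a shared node is reached along a different incoming edge, a fresh copy of its subtree is produced. A minimal sketch under an assumed dictionary encoding (all names are hypothetical):

```python
def unfold(dag, name, counter=None):
    """Unfold an AND/OR DAG into an equivalent AND/OR tree by replicating
    every shared subgraph once per incoming edge (same effect as ConvertDAG).

    dag: name -> {'type', 'cv', 'children': [(child_name, edge_cost), ...]}
    """
    if counter is None:
        counter = [0]
    counter[0] += 1
    node = dag[name]
    return {'id': '%s#%d' % (name, counter[0]),  # fresh id: a shared DAG node
                                                 # becomes distinct tree copies
            'type': node['type'],
            'cv': node['cv'],
            'children': [(unfold(dag, ch, counter), ce)
                         for ch, ce in node['children']]}

def tree_size(t):
    """Number of nodes in the unfolded tree."""
    return 1 + sum(tree_size(c) for c, _ in t['children'])
```

Note that unfolding may blow up the representation exponentially, which is precisely why the DAG semantics and the tree based semantics are treated separately in this paper.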

Example
In Figure 1 the optimal solution below every node is shown using thick dashed edges with arrow heads. The optimal solution of the AND/OR tree can be traced by following these thick dashed edges from node v_1. The cost of the optimal solution tree is 34. Also, Figure 2 shows an example of a DAG; the cost of the optimal solution DAG is 89.

Generating Ordered Solutions for AND/OR Trees
In this section we address the problem of generating ordered solutions for trees. We use the notion of alternating AND/OR trees, defined in Section 2, to present our algorithms. An alternating AND/OR tree presents a succinct representation, and so the correctness proofs are much simpler for alternating AND/OR trees. In Appendix C we show that every AND/OR tree can be converted to an equivalent alternating AND/OR tree with respect to the solution space.
It is worth noting that the search spaces of some problems (e.g., the multi-peg Tower of Hanoi problem) exhibit the alternating AND/OR tree structure. Moreover, the algorithms that are presented for alternating AND/OR trees work without any modification for general AND/OR trees. In this section, we first present the existing algorithm (Elliott, 2007) briefly, and then we present our proposed algorithms in detail.

Existing Bottom-Up Evaluation Based Method for Computing Alternative Solutions
We illustrate the working of the existing method proposed by Elliott (2007) for computing alternative solutions for trees using an example of an alternating AND/OR tree. This method (referred to as BU henceforth) computes the k-best solutions in a bottom-up fashion. At every node v_q, the k-best solutions are computed from the k-best solutions of the children of v_q. The overall idea is as follows.
a. For an OR node v_q, a solution rooted at v_q is obtained by selecting a solution of a child. Therefore the k-best solutions of v_q are computed by selecting the top k solutions from the entire pool consisting of all solutions of all children.
b. In the case of AND nodes, every child of an AND node v_q will have at most k solutions.
A solution rooted at an AND node v_q is obtained by combining one solution from every child of v_q. Different combinations of the solutions of the children of v_q generate different solutions rooted at v_q. Among those combinations, the top k combinations are stored for v_q.
In Figure 3 we show the working of the existing algorithm. At every intermediate node the 2-best solutions are shown within a rounded rectangle. At every OR node v_q, the i-th best solution rooted at v_q is shown as a triplet of the form ⟨i : ⟨child, sol_idx⟩, cost⟩. For example, at node v_1 the second best solution is shown as ⟨2 : ⟨v_2, 2⟩, 37⟩, which means that the 2nd best solution rooted at v_1 is obtained by selecting the 2nd best solution of v_2. Similarly, at every AND node v_q, the i-th solution rooted at v_q is shown as a triplet of the form ⟨i : |sol_vec|, cost⟩. Here sol_vec is a comma separated list of solution indices such that every element of sol_vec corresponds to a child of v_q. The j-th element of sol_vec shows the index of the solution of the j-th child. For example, the 2nd best solution rooted at v_2 is shown as ⟨2 : |2, 1|, 32⟩. This means the 2nd best solution rooted at v_2 is computed using the 2nd best solution of the 1st child (which is v_5) and the best (1st) solution of the 2nd child (which is v_6). Which index of sol_vec corresponds to which child is shown by placing the child node name above every index position.
The existing method works with the input parameter k, i.e., the number of solutions to be generated has to be known a priori. Also, this method is not inherently incremental in nature, and thus does not perform efficiently when the solutions are needed on demand, e.g., when at first the top 20 solutions are needed, and then the next 10 solutions are needed. In this case the top 20 solutions will have to be recomputed while computing the next 10 solutions, i.e., the solutions from the 21st to the 30th.
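For concreteness, the bottom-up computation of k-best solution costs can be sketched as follows. This is a naive rendering (not Elliott's actual implementation): at AND nodes it enumerates all combinations before truncating to the top k, whereas an efficient version would merge the children's sorted lists incrementally. The encoding and names are assumptions.

```python
from itertools import product

def k_best_costs(name, nodes, k):
    """Costs of up to k best solutions rooted at `name`, non-decreasing.

    nodes: name -> {'type': 'AND'|'OR'|'TERM', 'cv': node cost,
                    'children': [(child_name, edge_cost), ...]}
    """
    n = nodes[name]
    if n['type'] == 'TERM':
        return [n['cv']]
    # each child's k-best costs, shifted by the cost of the connecting edge
    child_lists = [[ce + c for c in k_best_costs(ch, nodes, k)]
                   for ch, ce in n['children']]
    if n['type'] == 'OR':
        # pool all children's solutions and keep the k cheapest
        pool = [n['cv'] + c for lst in child_lists for c in lst]
        return sorted(pool)[:k]
    # AND node: one solution per child, combined; keep the k cheapest combos
    combos = [n['cv'] + sum(t) for t in product(*child_lists)]
    return sorted(combos)[:k]
```

Observe that the whole recursion must be rerun with a larger k if more solutions are requested later, which is the non-incremental behaviour discussed above.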
Next we present our proposed top-down approach, which does not suffer from this limitation.

Top-Down Evaluation Algorithms for Generating Ordered Solutions
So far we have discussed the existing approaches, which primarily use a bottom-up approach for computing ordered solutions. We now propose a top-down approach for generating alternative solutions in non-decreasing order of cost. It may be noted that the top-down approach is incremental in nature. We use an edge marking based algorithm, Alternative Solution Generation (ASG), to generate the next best solutions from the previously generated solutions. In the initial phase of the ASG algorithm, we compute the optimal solution for a given alternating AND/OR tree T̃_αβ and perform an initial marking of all OR edges. Recall that, in the context of AND/OR trees, e_q denotes the edge that points to the vertex v_q. We use the following definitions for describing our proposed top-down approaches.
Definition 3.c [Aggregated Cost] In an AND/OR DAG G_αβ, the aggregated cost, c_a, for an edge e_ij from node v_i to node v_j, is defined as: c_a(e_ij) = c_e(e_ij) + C_opt(v_j).
⊓⊔
Marking of an OR edge: The notion of marking an OR edge is as follows. For an OR node v_q, L(v_q) is the list of OR edges of v_q sorted in non-decreasing order of the aggregated cost of the edges. We define δ_(i,i+1) as the difference between the aggregated costs of the OR edges e_i and e_{i+1}, such that e_i and e_{i+1} emanate from the same OR node v_q and e_{i+1} is the edge next to e_i in L(v_q). Procedure MarkOR describes the marking process for the OR edges of an OR node. Intuitively, a mark represents the cost increment incurred when the corresponding edge is replaced in a solution by its next best sibling. The OR edge having the maximum aggregated cost is not marked. Consider a solution, S_cur, containing the edge e_i = (v_q, v_i), where e_i ∈ E_opt(S_cur). We mark e_i with the cost increment which will be incurred to construct the next best solution from S_cur by choosing another child of v_q. In Figure 4 the marks corresponding to the OR edges are shown.

Definition 3.d [Swap Option] A swap option σ_ij is defined as a three-tuple ⟨e_i, e_j, δ_ij⟩ where e_i and e_j emanate from the same OR node v_q, e_j is the edge next to e_i in L(v_q), and δ_ij = c_a(e_j) − c_a(e_i). Also, we say that the swap option σ_ij belongs to the OR node v_q. ⊓⊔
Consider the OR node v_q and the sorted list L(v_q). It may be observed that in L(v_q) every consecutive pair of edges forms a swap option. Therefore, if there are k edges in L(v_q), k−1 swap options will be formed. At node v_q, these swap options are ranked according to the rank of their original edges in L(v_q). In Figure 4 the swap options are: σ_(2,3) = ⟨e_2, e_3, 5⟩, σ_(3,4) = ⟨e_3, e_4, 1⟩, σ_(9,10) = ⟨e_9, e_10, 3⟩, σ_(11,12) = ⟨e_11, e_12, 4⟩, σ_(13,14) = ⟨e_13, e_14, 3⟩, and σ_(14,15) = ⟨e_14, e_15, 6⟩. Consider the node v_1, where L(v_1) = ⟨e_2, e_3, e_4⟩. The swap options σ_(2,3) and σ_(3,4) therefore belong to v_1. At node v_1, the ranks of σ_(2,3) and σ_(3,4) are 1 and 2 respectively.
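The construction of L(v_q) and the swap options of a single OR node can be sketched as below. The encoding is hypothetical, and the edge costs and optimal costs in the usage example are made up so that the resulting deltas are easy to check by hand.

```python
def swap_options(children, c_opt):
    """Swap options for one OR node.

    children: [(edge_name, child_name, edge_cost)] for the node's OR edges.
    c_opt: child_name -> C_opt(child).
    Returns [(e_i, e_j, delta_ij), ...] for consecutive edges of L(v_q),
    where edges are sorted by aggregated cost c_a(e) = c_e(e) + C_opt(child).
    """
    L = sorted(children, key=lambda t: t[2] + c_opt[t[1]])
    options = []
    for (ei, vi, ci), (ej, vj, cj) in zip(L, L[1:]):
        # delta_ij = c_a(e_j) - c_a(e_i): cost increment of taking the swap
        delta = (cj + c_opt[vj]) - (ci + c_opt[vi])
        options.append((ei, ej, delta))
    return options
```

As stated above, k OR edges yield exactly k−1 swap options, and the edge with maximum aggregated cost never appears as an original edge.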
Definition 3.e [Swap Operation] A swap operation is defined as the application of a swap option σ_ij = ⟨e_i, e_j, δ_ij⟩ to a solution S_m that contains the OR edge e_i, in the following way:
a. Remove the subtree rooted at v_i from S_m. Let the modified tree be S′_m. Edge e_i is the original edge of σ_ij.
b. Add the subtree opt(v_j) to the S′_m constructed in the previous step. Let the newly constructed solution be S″_m. Edge e_j is the swapped edge of σ_ij.
Intuitively, a swap operation σ_ij = ⟨e_i, e_j, δ_ij⟩ constructs a new solution S″_m from S_m when S_m contains the OR edge e_i. Moreover, the cost of S″_m is increased by δ_ij compared to the cost of S_m if C(S_m, v_i) = C_opt(v_i).
⊓⊔ Our proposed algorithms use a swap option based compact representation, named signature, for storing the solutions. Intuitively, any alternative solution can be described as a set of swap operations performed on the optimal solution S_opt. It is interesting to observe that while applying an ordered sequence of swap options, σ_1, · · · , σ_k, the application of each swap operation creates an intermediate alternative solution. For example, when the first swap option in the sequence, σ_1, is applied to the optimal solution, S_opt, a new solution, say S_1, is constructed. Then, when the 2nd swap option, σ_2, is applied to S_1, yet another solution S_2 is constructed. Let S_i denote the solution obtained by applying the swap options σ_1, · · · , σ_i to S_opt in this sequence. Although an ordered sequence of swap options, like σ_1, · · · , σ_k, can itself be used as a compact representation of an alternative solution, the following key points are important to observe.

A. Among all possible sequences that generate a particular solution, we need to preclude those sequences which contain redundant swap options (those swap options whose original edge is not present in the solution to which it is applied). This is formally defined later as superfluous swap options. The order of applying the swap options is another important aspect. There can be two swap options, σ_i and σ_j, where 1 ≤ i < j ≤ k, such that the original edge of σ_j belongs to the sub-tree which is included in the solution S_i only after applying σ_i to S_{i−1}. In this case, if we apply σ_j in the place of σ_i, i.e., apply σ_j directly to S_{i−1}, it will have no effect, as the original edge of σ_j is not present in S_{i−1}; i.e., after swapping the positions of σ_i and σ_j in the sequence, σ_j becomes a redundant swap option, and the solution constructed from the swapped sequence would differ from that of the original sequence. We formally define an order relation on a pair of swap options based on this observation in the later part of this section and formalize the compact representation of the solutions based on that order relation.

B. Suppose the swap option σ_j belongs to a node v_{p_j}. It is important to observe that the application of σ_j on S_{j−1} to construct S_j invalidates the application of all other swap options that belong to an OR edge on the path from the root node to v_{p_j} in the solution S_j. This is because in S_j the application of any such swap option would make the swap at v_{p_j} redundant. In fact, for each swap option σ_i belonging to node v_{p_i}, where 1 ≤ i ≤ j, the application of all other swap options that belong to an OR edge on the path from the root node to v_{p_i} is invalidated in the solution S_j for the same reason. This condition restricts the set of swap options that can be applied to a particular solution.

C. Finally, there can be two swap options σ_i and σ_j for 1 ≤ i < j ≤ k such that σ_i and σ_j are independent of each other, that is, (a) applying σ_i to S_{i−1} and subsequently applying σ_j to S_{j−1}, and (b) applying σ_j to S_{i−1} and subsequently applying σ_i to S_{j−1}, ultimately construct the same solution. This happens only when the original edges of both σ_i and σ_j are present in S_{i−1}; thus the application of one swap option does not influence the application of the other. However, it is desirable to use only one way to generate solution S_j. In Section 3.3, we propose a variation of the top-down approach (called LASG) which resolves this issue.
Definition 3.f [Order Relation R] We define an order relation, namely R, between a pair of swap options as follows. a. If there is a path from v i to v r in T αβ , where e i and e r are OR edges and σ qi and σ rj are swap options, then (σ qi , σ rj ) ∈ R. For example, in Figure 4, (σ (3,4) , σ (13,14) ) ∈ R. b. If σ pq = ⟨e p , e q , δ pq ⟩ and σ rt = ⟨e r , e t , δ rt ⟩ are two swap options such that v q = v r , then (σ pq , σ rt ) ∈ R. In Figure 4, (σ (2,3) , σ (3,4) ) ∈ R. ⊓ ⊔ Implicit Representation of the Solutions : We use an implicit representation for storing every solution other than the optimal one. These other solutions can be constructed from the optimal solution by applying a set of swap options to it in the following way. If (σ i , σ j ) ∈ R, σ i has to be applied before σ j . Therefore, every solution is represented as a sequence Σ of swap options, where σ i appears before σ j in Σ if (σ i , σ j ) ∈ R.
Intuitively, the application of each swap option specifies that the swapped edge becomes part of the solution. Since the swap options are applied in an order consistent with R, it may happen that an OR edge which became part of the solution due to the application of an earlier swap option gets swapped out due to the application of a later swap option.

Definition 3.g [Superfluous Swap Option] Consider a sequence of swap options Σ = σ 1 , · · · , σ m corresponding to a solution S m . It is possible for a swap option σ i , where 1 ≤ i ≤ m, to be present in the sequence such that the original edge of σ i is not present in the solution S i−1 , which is constructed by the successive applications of the swap options σ 1 , · · · , σ i−1 to the solution S opt . The application of σ i then has no effect on S i−1 , i.e., solution S i is identical to solution S i−1 . Each such swap option σ i is a superfluous swap option with respect to the sequence Σ of swap options corresponding to solution S m . ⊓ ⊔
This property follows from the definition of superfluous swap options and the notion of the implicit representation of a solution.
Definition 3.h [Signature of a Solution] The minimal sequence of swap options corresponding to a solution S m is defined as the signature, Sig(S m ), of that solution. It may be noted that for the optimal solution S opt of any alternating AND/OR tree T αβ , Sig(S opt ) = {}, i.e., the empty sequence. It is possible to construct more than one signature for a solution, as R is a partial order. It is important to observe that all the different signatures of a particular solution are of equal length, and the sets of swap options corresponding to these different signatures are also equal. Therefore the set of swap options corresponding to a signature is a canonical representation of the signature. Henceforth we will use the set notation for describing the signature of a solution. In Figure 5 we show a solution, say S 2 , of the AND/OR tree shown in Figure 4. The solution is highlighted using thick dashed lines with arrow heads. The pair c v (v q ), C(S 2 , v q ) is shown within a rectangle beside each node v q in solution S 2 , and rectangles with rounded corners are used whenever C(S 2 , v q ) ≠ C opt (v q ). Since S 2 is generated by applying the swap option σ (2,3) to solution S opt , the signature of S 2 is Sig(S 2 ) = {σ (2,3) }. Consider another sequence of swap options, Σ 2 = σ (2,3) , σ (9,10) . It is worth noting that Σ 2 also represents the solution S 2 . Here the second swap option in Σ 2 , namely σ (9,10) , cannot be applied to the solution constructed by applying σ (2,3) to S opt , as the source edge of σ (9,10) , e 9 , is not present in that solution. Hence σ (9,10) is a superfluous swap option for Σ 2 .
Definition 3.i [V opt and E opt ] For any solution graph S m of an AND/OR DAG G αβ , we define a set of nodes, V opt (S m ), and a set of OR edges, E opt (S m ), as follows: V opt (S m ) is the set of nodes v i in S m whose sub-solution in S m is optimal, i.e., opt(v i ) = S m (v i ), and E opt (S m ) is the set of OR edges e i in S m such that v i ∈ V opt (S m ). The application of all other swap options that belong to the OR edges on the path from the root node to v p i is invalidated in the solution S m . Hence, only the remaining swap options that are not invalidated in S m can be applied to S m for constructing the successor solutions of S m . It is important to observe that for a swap option σ i , if the source edge of σ i belongs to E opt (S m ), the application is not invalidated in S m . Hence, for a solution S m , we construct L(S m ) by restricting the swap operations to the edges belonging to E opt (S m ). Moreover, this condition also ensures that the cost of a newly constructed solution can be computed directly from the cost of the parent solution and the δ value of the applied swap option. To elaborate, suppose solution S ′ m is constructed from S m by applying σ jk . The cost of S ′ m can be computed directly from C(S m ) and σ jk as : C(S ′ m ) = C(S m ) + δ jk . The swap list of the optimal solution, L(S opt ), in Figure 4 is {σ (2,3) , σ (9,10) }. In the solution S 1 , shown in Figure 6, V opt = {v 6 , v 10 }, because except for nodes v 6 and v 10 , for all other nodes v i in S 1 , opt(v i ) ≠ S 1 (v i ). Here also, rectangles with rounded corners are used when C(S 1 , v q ) ≠ C opt (v q ). Therefore, E opt = {e 6 , e 10 }. Since there exists no swap option in Figure 4 on the OR edges e 6 and e 10 , the swap list of solution S 1 is L(S 1 ) = ∅. Hence, for a solution S m , L(S m ) may be empty, though V opt (S m ) can never be empty. Although we use the notation σ ij to denote a swap option with edge e i as the original edge and edge e j as the swapped edge, for succinct representation we also use σ with a single subscript, such as σ 3 , σ k , σ ij , etc., to represent a swap option. This alternative representation of swap options does not relate to any edge.
Property 3.2 For any solution S m of an alternating AND/OR tree T αβ , the following statement holds: for every predecessor solution S ′ m ∈ P red(S m ), C(S ′ m ) ≤ C(S m ). The property follows from the definitions. One special case requires attention: the case when C(S ′ m ) = C(S m ) and S ′ m ∈ P red(S m ). This case can only arise when a swap option of cost 0 is applied, i.e., in the case of a tie.

ASG Algorithm
We present ASG, a best first search algorithm, for generating solutions of an alternating AND/OR tree in non-decreasing order of cost. The overall idea of this algorithm is as follows. We maintain a list, Open, which initially contains only the optimal solution S opt . At any point of time, Open contains the set of candidate solutions from which the next best solution in the non-decreasing order of cost is selected. In each iteration the minimum cost solution (S min ) in Open is removed from Open and added to another list, named Closed. The Closed list contains the set of ordered solutions generated so far. Then the successor set of S min is constructed, and any successor solution which is not currently present in Open and has not already been added to Closed is inserted into Open. As a further optimization, we use a sublist of Closed, named TList, to store the relevant portion of Closed, such that checking against the solutions in TList is sufficient to determine whether a successor solution has already been added to Closed. It is interesting to observe that this algorithm can be interrupted at any time and the set of ordered solutions computed so far can be obtained. Also, the algorithm can be resumed if more solutions are needed. The details of the ASG algorithm are presented in Algorithm 4. The pseudo-code from Line-1 to Line-4 computes the optimal solution S opt , performs the marking of OR edges, populates the swap options, and initializes Open, Closed and TList. The loop in Line-10 is responsible for generating a new solution every time it is executed, as long as Open is not empty. In Line-6 of the ASG algorithm, the current minimum cost solution in Open (S min ) is selected and removed from Open. The TList is populated and maintained from Line-7 to Line-10.
The loop in Line-13 generates the successor solutions of S min one by one and adds each newly constructed solution to Open if it is neither already present in Open nor already added to TList (Line-16 performs this check). The proof of correctness of Algorithm 4 is presented in Appendix A. We discuss the following issues related to Algorithm 4.
Checking for Duplication : In order to check whether a particular solution S i is already present in Open or TList, the signature of S i is matched against the signatures of the solutions already present in Open and TList. It is sufficient to check the equality of the sets of swap options in the respective signatures, because that set is unique for a particular solution. It may be noted that TList is used as an optimization that avoids searching the entire Closed list.
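The Open/Closed loop with signature-based duplicate checking can be sketched as follows. This is a schematic toy, not the paper's Algorithm 4: signatures are abstracted as sets of independent swap-option names with hypothetical δ costs, and the restriction of successors to the swap list L(S min ) is omitted.

```python
import heapq

def asg_enumerate(deltas, count):
    """Yield up to `count` (cost, signature) pairs in non-decreasing cost order.
    deltas: {swap option name: cost increment}."""
    counter = 0                               # tie-breaker for the heap
    open_heap = [(0, counter, frozenset())]   # Open: (cost, tie, signature)
    seen = {frozenset()}                      # signatures in Open or Closed
    closed = []
    while open_heap and len(closed) < count:
        cost, _, sig = heapq.heappop(open_heap)   # current minimum, S_min
        closed.append((cost, sig))                # move S_min to Closed
        for name, delta in deltas.items():        # construct successors
            if name in sig:
                continue
            succ = sig | {name}
            if succ not in seen:                  # duplicate check by signature
                seen.add(succ)
                counter += 1
                heapq.heappush(open_heap, (cost + delta, counter, succ))
    return closed

sols = asg_enumerate({"s1": 2, "s2": 3, "s3": 7}, 4)
```

Representing each signature as a frozenset makes the duplicate check a constant-time set-membership test, mirroring the hash-map optimization discussed later in the complexity analysis.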
Resolving Ties : While removing the minimum cost solution from the Open list, a tie may be encountered among a set of solutions. Suppose there is a tie among the set S tie = {S 1 , · · · , S k }. The ties are resolved in favor of the predecessor solutions. For all other cases the ties are resolved arbitrarily, in favor of the solution which was added to Open first.

Working of ASG Algorithm
We illustrate the working of the ASG algorithm on the example AND/OR tree shown in Figure 4. The contents of the different lists obtained after the first few iterations of the outermost while loop are shown in Table 1. We use the signature of a solution for representation purposes. The solutions that are already present in Open and are also constructed by expanding the current S min are highlighted with under-braces. Before entering the outermost while loop (Line 5), ASG computes the optimal solution S opt , populates the swap options, and inserts S opt into Open. Thus, at this point of time, Open contains only the optimal solution S opt ; Closed and TList are empty. In the first iteration, S opt (whose signature is {}) is selected and removed from Open. Then the swap list of S opt , L(S opt ), is computed; it consists of two swap options, namely σ (2,3) and σ (9,10) . ASG adds two new solutions, {σ (2,3) } and {σ (9,10) }, to Open. Then solution S opt is added to both Closed and TList.

Technique for Avoiding the Checking for Duplicates in Open
In this section, we present a technique to avoid the check performed before adding a newly constructed solution S m to Open to determine whether S m is already present in Open. We first explain the scenario with an example, which is a portion of the previous example shown in Figure 4. In Figures 7-10, the solutions are shown using thick dashed lines with arrow heads. Also, rectangles with rounded corners are used to highlight the fact that the corresponding node in the marked solution does not belong to the V opt set of that solution. We use the following definitions to describe another version of the ASG algorithm, which constructs the solutions in such a way that the check to find out whether a solution has already been added to Open is avoided.
Definition 3.l [Solution Space DAG (SSDAG)] The solution space DAG of an alternating AND/OR tree T αβ is a directed acyclic graph (DAG), G s = ⟨V, E⟩, where V is the set of all possible solutions of the AND/OR tree T αβ , and E is the set of edges, defined as follows: there is a directed edge from solution S p to solution S m if S p ∈ P red(S m ).

Definition 3.m [Solution Space Tree and Completeness] A solution space tree of an alternating AND/OR tree T αβ is a tree ⟨V, E t ⟩, where V is the set of all possible solutions of the AND/OR tree T αβ , and E t is the set of edges, defined as follows: e s pm ∈ E t is a directed edge from node S p to node S m , where S p ∈ P red(S m ). It may be noted that the complete solution space tree of an alternating AND/OR tree is not necessarily unique; an alternating AND/OR tree may have more than one complete solution space tree. However, the solution space DAG of any AND/OR tree is unique.
Definition 3.n [Native Swap Options of a Solution] Consider a solution S m of an alternating AND/OR tree T αβ . Suppose S m is constructed by applying swap option σ ij to solution S p . Since swap option σ ij = ⟨e i , e j , δ ij ⟩ is used to construct S m , AND node v j is present in S m . The set of native swap options of solution S m with respect to swap option σ ij , N (S m , σ ij ), is a subset of L(S m ) and comprises the following swap options: a. σ jk , where σ jk is the swap option on the edge e j ; b. each σ t , if σ t belongs to an OR node v q where v q is a node in S m (v j ). We use the term N (S m ) to denote the native swap options when σ ij is understood from the context. Intuitively, the native swap options of solution S m are the swap options that become available immediately after applying σ ij but were not available in the predecessor solution of S m .

Lazy ASG Algorithm
The intuition behind the other version of the ASG algorithm is as follows. For a newly constructed solution S m , we need to check whether S m is already present in Open, because S m can be constructed while computing the successor sets of multiple solutions. Instead of using the entire swap list of a solution to construct all successors at once and then adding those solutions to Open, using the native swap options to construct a subset of the successor set ensures the following. The subset constructed using native swap options consists only of solutions that are currently not present in Open, and these can therefore be added to Open without comparing them with the existing entries in Open. The construction of each remaining successor solution S ′ m of S m , and its insertion into Open, is delayed until every other predecessor solution of S ′ m has been added to Closed.

Algorithm 5: Lazy ASG (LASG) Algorithm
  input : An alternating AND/OR tree T αβ
  output: Alternative solutions of T αβ in the non-decreasing order of cost
1   Compute the optimal solution S opt , perform OR edge marking and populate the swap options;
2   Create two lists, Open and Closed, that are initially empty;
3   Put S opt in the Closed list;
4   Create a solution space tree T s with S opt as root;
...
10  while Open is not empty do
11      S min ← Remove the minimum cost solution from Open;
        /* Suppose S min is constructed from S m applying swap option σ ij */
12      Add a node corresponding to S min in T s and connect that node using an edge from S m ;
13      Compute the swap list L(S min ) and the list of native swap options N (S min , σ ij );
        /* Expansion using native swap options */
14      foreach σ tmp ∈ N (S min , σ ij ) do
15          Construct S tmp from S min by applying σ tmp ;
16          Construct the signature of S tmp , Sig(S tmp ), by concatenating σ tmp after Sig(S min );
17          Add S tmp to Open;

The solution space tree T s is maintained throughout the course of the algorithm to determine when every other predecessor of S ′ m has been added to Closed. Based on this idea we present a lazy version of the ASG algorithm, named LASG. After selecting the minimum cost solution from Open, the algorithm explores the successor set of the current minimum cost solution in a lazy fashion. For a solution S m , at first a subset of Succ(S m ) is constructed using only the native swap options of S m . The other solutions that belong to Succ(S m ) are explored as late as possible, as described above. For resolving ties, the LASG algorithm uses the same strategy as the ASG algorithm. The details of the LASG algorithm are presented in Algorithm 5. The proof of correctness of this algorithm is presented in Appendix B.
Consider the example tree shown in Figure 7 and the solutions S 1 and S 2 (shown in Figure 9 and Figure 10). Initially Open contains only S opt , and N (S opt ) = {σ (11,12) , σ (13,14) }. When S opt is selected from Open, both S 1 and S 2 are added to Open. Next, S 1 is selected, followed by S 2 . Since N (S 1 ) = ∅ and N (S 2 ) = ∅, after selecting S 1 or S 2 no successor solutions are constructed using the native swap list. Among the predecessors of S 3 , S 2 is the last to be added to Closed. After selecting and removing S 2 from Open, solution S 3 is constructed from the previously selected predecessor S 1 using the swap option σ (11,12) , the same swap option that was used to construct solution S 2 from S opt .
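The duplicate-avoidance principle behind LASG can be illustrated with a simplified analogue (not the paper's algorithm): if every solution is assigned a unique "canonical" generating parent, no signature is ever pushed to Open twice and the membership check disappears. In the toy below, a signature may only be extended with option indices larger than any it already contains, which plays the role that native swap options play in LASG; the real algorithm additionally delays the remaining successors using the solution space tree.

```python
import heapq

def lazy_enumerate(deltas, count):
    """Enumerate (cost, signature) pairs in non-decreasing cost order,
    generating every signature exactly once (no duplicate check needed)."""
    deltas = sorted(deltas)          # hypothetical swap-option costs
    open_heap = [(0, -1, ())]        # (cost, largest index used, signature)
    out = []
    while open_heap and len(out) < count:
        cost, last, sig = heapq.heappop(open_heap)
        out.append((cost, sig))
        # canonical extensions only: indices strictly beyond the last one
        for i in range(last + 1, len(deltas)):
            heapq.heappush(open_heap, (cost + deltas[i], i, sig + (i,)))
    return out

sols = lazy_enumerate([2, 3, 7], 4)
```

Because every extension strictly increases the cost and each signature has exactly one parent, the heap alone guarantees both the non-decreasing order and the absence of duplicates.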

Complexity Analysis and Comparison among ASG, LASG and BU
In this section we present a complexity analysis of ASG and LASG and compare them with BU. We will use the following parameters in the analysis.
a. n αβ and n β denote the total number of nodes and the number of OR nodes, respectively, in an alternating AND/OR tree. b. d denotes the out-degree of the OR node having the maximum number of children. c. m denotes the maximum number of OR edges in a solution. d. o denotes the maximum size of Open. We present the complexity analysis for generating c solutions; therefore the size of Closed is O(c). Space Complexity : Compared to the ASG algorithm, the LASG algorithm does not maintain the TList. However, LASG maintains the solution space tree T s , whose size is equal to that of the Closed list, thus adding another O(c) term to the space complexity incurred by the ASG algorithm. It is interesting to observe that the worst case space complexity remains O(o + n β .d + o.n β .d) = O(o.n β .d), which is equal to the space complexity of the ASG algorithm.

Comparison with BU
The time complexity of generating the c best solutions for an AND/OR tree using BU is O(n αβ .c. log c) and the space complexity is O(n αβ .c). The detailed analysis can be found in the work of Elliott (2007). Since n β .d = O(n αβ ), the space complexity of both the ASG and LASG algorithms reduces to O(n αβ .c); the time complexity of LASG is a log c factor better than that of BU, whereas the time complexity of ASG is quadratic in c, compared to the (c. log c) factor of BU. When an additional hash-map is used to reduce the time overhead of duplicate checking, ASG beats both LASG and BU in terms of time complexity, as both O(n αβ ) and O(√n αβ .(c. lg c + c. lg n αβ )) are asymptotically lower than O(n αβ .c. log c).
However, this worst case complexity is possible only for AND/OR trees where no duplicate solution is generated. Empirical results show that the length of Open, o, hardly reaches O(c.m).

Ordered Solution Generation for AND/OR DAGs
In this section, we address the problem of generating solutions in non-decreasing order of cost for a given AND/OR DAG. We first present the working of the existing algorithm for generating solutions under both tree-based semantics and default semantics. Next we present the modifications in ASG and LASG for handling DAGs. Figure 12 shows an example of the working of the existing bottom-up approach, BU, on the AND/OR DAG in Figure 2. We use the notations of Figure 3 to describe the different solutions in Figure 12, and the generation of the top 2 solutions under tree-based semantics is shown.

Existing Bottom-Up Algorithm
It is important to notice that although BU correctly generates alternative solutions of an AND/OR DAG under tree-based semantics, BU may generate some solutions which are invalid under default semantics. In Figure 13 we present a solution of the AND/OR DAG in Figure 2 which is correct under tree-based semantics but invalid under default semantics. The solution DAG (highlighted using thick dashed lines with arrow heads) in Figure 13 is generated as the 3rd solution of the AND/OR DAG in Figure 2 when running BU. At every non-terminal node, the entry (within a rectangle) corresponding to the 3rd solution is highlighted using bold face. It may be noted that the terminal nodes, v 9 and v 10 , are both included in the solution DAG although they emanate from the same parent OR node. Therefore, this solution is not valid under default semantics. For each newly constructed solution rooted at v q , a top-down traversal of that solution starting from v q is done to check whether more than one edge of an OR node is present in that particular solution (a violation of the default semantics). If such a violation of the default semantics is detected, that solution is pruned from the list of alternative solutions rooted at v q . Therefore, at every AND node, when a new solution is constructed, an additional top-down traversal is used to detect the semantics violation.
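The violation check described above can be sketched as follows; the graph encoding and identifiers are illustrative, not the paper's data structures.

```python
# Default semantics: an OR node may contribute at most one outgoing edge
# to a solution, even when it is reached along multiple paths.

def violates_default_semantics(or_children, solution_edges):
    """or_children: {or_node: list of its children};
    solution_edges: set of (parent, child) pairs in the candidate solution.
    Returns True iff some OR node keeps more than one outgoing edge."""
    for node, children in or_children.items():
        used = [c for c in children if (node, c) in solution_edges]
        if len(used) > 1:
            return True
    return False

# Both v7 -> v9 and v7 -> v10 present: invalid under default semantics,
# though acceptable under tree-based semantics.
bad = violates_default_semantics({"v7": ["v9", "v10"]},
                                 {("v7", "v9"), ("v7", "v10")})
ok = violates_default_semantics({"v7": ["v9", "v10"]}, {("v7", "v9")})
```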

Top-Down Method for DAGs
The proposed top-down approaches (ASG and LASG) are also applicable to AND/OR DAGs for generating alternative solution DAGs under default semantics. Only the method of computing the cost increment after the application of a swap option needs to be modified, to incorporate the fact that an OR node may be included in a solution DAG through multiple paths from the root node. We use the notion of participation count for computing the cost increment.
Participation Count : The notion of participation count applies to the intermediate nodes of a solution DAG as follows. In a solution DAG, the participation count of an intermediate node v q is the total number of distinct paths connecting the root node, v R , and v q . For example, in Figure 14, the optimal solution DAG is shown using thick dashed lines with arrow heads, and the participation count of every intermediate OR node is shown within a circle beside the node. We use the notation σ ijk to denote a swap option in the context of AND/OR DAGs, where swap option σ ijk belongs to node v i , the source edge of the swap option is e ij from node v i to node v j , and the destination edge is e ik from node v i to node v k .
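Participation counts can be computed by a simple path-counting traversal. The sketch below uses a hypothetical adjacency-map encoding of a solution DAG; the recursion revisits shared nodes once per incoming path, which is acceptable for illustration though a topological-order DP would be preferred for large DAGs. A natural reading of the modification (an assumption here, not a quote of the paper's formula) is that applying a swap option with increment δ at a node with participation count pc changes the solution cost by pc · δ, since the node's subtree cost is counted once per path.

```python
def participation_counts(children, root):
    """children: {node: list of child nodes}. Returns, for every node
    reachable from `root`, the number of distinct root-to-node paths."""
    counts = {}

    def visit(v, paths):
        counts[v] = counts.get(v, 0) + paths   # accumulate paths reaching v
        for w in children.get(v, []):
            visit(w, paths)

    visit(root, 1)
    return counts

# v_c is reachable via v_a and via v_b, so its participation count is 2.
pc = participation_counts({"vR": ["va", "vb"], "va": ["vc"], "vb": ["vc"]}, "vR")
```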

Modification in the Proposed Top-Down Approach
The ASG algorithm is modified for handling AND/OR DAGs in the following way. The computation of the successor solution in Line 14 of Algorithm 4 is modified to incorporate the participation count of the OR node to which the applied swap option belongs. The overall method is shown in Algorithm 6 (on the next page).
In order to apply LASG to AND/OR DAGs, apart from the above modification for computing the cost of a newly generated solution, another modification is needed for computing the native swap options of a given solution. The modification is explained with an example. Consider the solution S 1 shown in Figure 15, highlighted using thick dashed lines with arrow heads. The pair c v (v q ), C(S 1 , v q ) is shown within a rectangle beside each node v q ; rectangles with rounded corners are used when C(S 1 , v q ) ≠ C opt (v q ). Swap option σ (2,5,4) was applied to S opt to generate S 1 . After the application of swap option σ (2,5,4) , the participation count of node v 5 is decremented to 1. Therefore in S 1 there is still a path from the root node to node v 5 , and so node v 5 is still present in S 1 . As a result, the swap option σ (7,9,10) is available to S 1 with a participation count equal to 1 for node v 7 , whereas σ (7,9,10) is available to its parent solution S opt with participation count 2 for node v 7 . In other words, σ (7,9,10) is not available to S 1 and to its parent solution S opt with the same value of the participation count for node v 7 . Therefore σ (7,9,10) becomes a native swap option of S 1 . The generalized definition of native swap options of a solution is presented below.

Definition 4.o [Native Swap Options of a Solution]
Consider a solution S m of an AND/OR DAG G αβ , where S m is constructed by applying swap option σ hij to solution S p . Since swap option σ hij = ⟨e hi , e hj , δ hij ⟩ is used to construct S m , AND node v j belongs to S m . Similarly, if the participation count of node v i remains greater than zero after applying σ hij , node v i also belongs to S m . The set of native swap options of solution S m with respect to swap option σ hij , N (S m , σ hij ), is a subset of L(S m ) and comprises the following swap options: a. σ hjk , where σ hjk is the swap option on the edge e hj ; b. each σ t , if σ t belongs to an OR node v q where v q is a node in S m (v j ); c. each σ ′ t , if node v i is present in S m and σ ′ t belongs to an OR node v q where v q is a node in S m (v i ). We use the term N (S m ) to denote the native swap options when σ hij is understood from the context. Intuitively, the native swap options of solution S m are the swap options that become available immediately after applying σ hij but were not available in the predecessor solution of S m . ⊓ ⊔ It is worth noting that Definition 4.o of native swap options is a generalization of the earlier definition (Definition 3.n), given in the context of trees. In the case of trees, the participation count of any node can be at most 1. Therefore, after the application of a swap option to a solution, the participation count of the node to which the original edge of the swap option points becomes 0, and so the third condition is never applicable for trees. LASG (Algorithm 5) can be applied to AND/OR DAGs, with the mentioned modification for computing the cost of a newly generated solution and the general definition of native swap options, to generate ordered solutions under default semantics.

Working of ASG and LASG Algorithm on AND/OR DAG
We describe the working of the ASG algorithm on the example DAG shown in Figure 2. Before entering the outermost while loop, TList and Closed are empty, and Open contains the optimal solution S opt . The contents of the different lists obtained after the first few iterations of the outermost while loop are shown in Table 3. Each solution is represented by its signature. The solutions that are already present in Open and are also constructed by expanding the current S min are highlighted with under-braces. For example, the solution {σ (2,5,4) , σ (3,5,6) }, which is added to Open in Iteration 2 (while constructing the successor solutions of {σ (2,5,4) }), is constructed again in Iteration 5 while expanding solution {σ (3,5,6) }.

Generating Solutions under Tree Based Semantics
Unlike for the default semantics, neither ASG nor LASG has a straightforward extension for generating solutions under tree-based semantics. In Figure 13 we show an example solution which is valid under tree-based semantics but invalid under default semantics, because both OR edges emanating from the OR node v 7 , namely e (7,9) and e (7,10) , are included in the solution.

Experimental Results and Observations
To obtain an idea of the performance of the proposed algorithms and to compare with the existing approach, we have implemented the ASG, LASG and BU (existing bottom-up approach) and tested on the following test domains. a. A set of synthetically generated AND/OR trees; b. Tower of Hanoi (TOH) problem; c. A set of synthetically generated AND/OR DAGs; d. Matrix-chain multiplication problem; and e. The problem of determining the secondary structure of RNA sequences.
It may be noted that in our implementation of the ASG algorithm, we have implemented the more space-efficient version (without a separate hash-map for storing the solutions in Open and Closed, thereby incurring an extra overhead in time for duplicate checking). Another important point is that, for every test case, the reported running time of ASG and LASG for generating a particular number of solutions includes the time required for constructing the optimal solution graph. The details of the different test domains are as follows.

Complete Trees
We have generated a set of complete d-ary alternating AND/OR trees by varying (a) the degree of the non-terminal nodes (denoted by d), and (b) the height (denoted by h). Here the objective is to find the alternative solutions in the order of non-decreasing cost. Table 5 shows the time required for generating 100, 300, and 500 solutions for various complete alternating AND/OR trees. We have implemented ASG, LASG and the existing bottom-up algorithm, and the corresponding running times are shown in the columns with the headings ASG, LASG and BU, respectively. We have used a time limit of 15 minutes. For low values of the d and h parameters, e.g., (d, h) combinations like (2, 7), (3, 5), etc., BU performs better than ASG. On the contrary, for the other combinations, where at least one of the d and h parameters has a high value, e.g., (d, h) combinations like (2, 17), (7, 5), (4, 9), etc., ASG outperforms BU.

Experimentation with Queue with Bounded Length
Since Open can grow very rapidly, both ASG and LASG incur a significant overhead in terms of time as well as space to maintain the Open list when the number of solutions to be generated is not known a priori. In fact, for ASG, checking for duplicates in Open is the primary source of time complexity, and storing the solutions in Open is a major contributing factor in space complexity. If the number of solutions to be generated is known a priori, the proposed top-down approaches can leverage this fact by using a bounded length queue for implementing Open. When a bounded length queue is used, the time requirement as well as the space requirement decreases significantly. We show the effect of using a bounded length queue to implement Open in Table 7 (reporting the time requirement) and in Table 8 (reporting the memory usage) for generating 100, 300, and 500 solutions, when the number of solutions to be generated is known beforehand. Table 7 and Table 8 show that in this case both ASG and LASG outperform BU in terms of time as well as space requirements. In particular, ASG performs very well in this setting, outperforming LASG in some cases.
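The soundness of bounding Open rests on a simple observation: if c solutions are wanted in total and a candidate is more expensive than c other entries of Open, those c entries will all be popped (and reported) before it, so the candidate can be discarded. A minimal sketch of such a bounded push, with the pruning amortized by only triggering at twice the bound, might look as follows (illustrative, not the paper's implementation):

```python
import heapq

def push_bounded(open_list, item, bound):
    """Push `item`; when Open exceeds twice the bound, prune it back to
    the `bound` cheapest entries (the rest can never be among the c best)."""
    heapq.heappush(open_list, item)
    if len(open_list) > 2 * bound:
        open_list[:] = heapq.nsmallest(bound, open_list)
        heapq.heapify(open_list)

open_list = []
for x in [9, 4, 7, 1, 8, 3, 6, 2, 5, 10]:
    push_bounded(open_list, x, 3)          # bound = 3 solutions wanted
best3 = [heapq.heappop(open_list) for _ in range(3)]
```

The pruning keeps Open at O(c) entries regardless of how many successors are generated, which is the source of the space savings reported in Table 8.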

Experimentation to Compare the Incremental Nature
The proposed top-down algorithms are incremental in nature, whereas the existing bottom-up approach is not. After generating a specified number of ordered solutions, our methods can generate the next solution incrementally without restarting, whereas the existing approach needs to be restarted. For example, after generating the first 10 ordered solutions, ASG and LASG generate the 11th solution directly from the data structures maintained so far and perform the necessary updates to these data structures, whereas BU needs to be restarted with input parameter 11 for generating the 11th solution. In Table 9 we compare the time needed to generate the 11th and 12th solutions incrementally after generating the first 10 solutions. In order to bring more clarity to the comparison among the running times of the respective algorithms, we have used higher precision (up to the 6th decimal place) while reporting the running times in Table 9. Clearly, both ASG and LASG outperform BU in generating the 11th and 12th solutions in terms of the time requirement.
Table 9: Comparison of running time (in seconds) for generating the first 10 solutions and then the 11th and 12th solutions incrementally, for complete alternating AND/OR trees
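This pause-and-resume behaviour maps naturally onto a generator: the Open structure persists between calls, so the 11th solution is produced without recomputing the first 10. The sketch below reuses a toy cost space (sorted hypothetical δ values, one canonical parent per signature) as a stand-in for the real successor relation.

```python
import heapq

def ordered_solutions(deltas):
    """Generator yielding (cost, signature) in non-decreasing cost order;
    Open persists inside the generator between next() calls."""
    open_heap = [(0, -1, ())]        # (cost, largest index used, signature)
    while open_heap:
        cost, last, sig = heapq.heappop(open_heap)
        yield cost, sig
        for i in range(last + 1, len(deltas)):
            heapq.heappush(open_heap, (cost + deltas[i], i, sig + (i,)))

gen = ordered_solutions([2, 3, 7, 11])
first10 = [next(gen) for _ in range(10)]
eleventh = next(gen)                 # produced incrementally, no restart
```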

Multipeg Tower of Hanoi Problem
Consider the Multipeg Tower of Hanoi problem (Majumdar, 1996; Gupta, Chakrabarti, & Ghose, 1992). In this problem, ρ pegs are fastened to a stand. Initially, γ disks rest on the source peg A in small-on-large ordering. The objective is to transfer all γ disks from A to the destination peg B in the minimum number of legal moves. In a legal move, the topmost disk of any peg can be transferred to any other peg whose topmost disk is larger. The multipeg Tower of Hanoi problem can be solved recursively as follows. a. Recursively move the topmost k (k varies from 1 to γ − 1) disks from A to some intermediate peg, I, using all ρ pegs. b. Transfer the remaining γ − k disks from A to B recursively, using the (ρ − 1) available pegs. c. Recursively move the k disks that were previously transferred to I from the intermediate peg I to B, using all ρ pegs. It may be noted that there is a choice for the value of k, which may take any value from 1 to γ − 1. Solutions with different values of k may take different numbers of moves, and the solution which incurs the minimum number of moves is the optimal solution. This choice of the value of k is modeled as an OR node, and for every such choice, the problem is divided into three sub-problems. This decomposition into sub-problems is modeled as an AND node. Therefore, the search space of the multipeg Tower of Hanoi problem corresponds to an alternating AND/OR tree. We have used the search space of the 5-peg Tower of Hanoi problem with different numbers of disks, γ, and generated alternative solutions in non-decreasing order of cost using the ASG and LASG algorithms. Here the cost function expresses the number of legal moves. The value of γ is varied from 8 to 13, and in Table 10 and Table 11 we report the time and space required, respectively, for generating 100, 300, and 500 solutions for every test case.
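The recursive decomposition above (the Frame-Stewart scheme) can be evaluated directly; the minimum over k is exactly the cost of the optimal solution of the corresponding AND/OR tree. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_moves(disks, pegs):
    """Minimum number of moves under the recursive decomposition above."""
    if disks == 1:
        return 1
    if pegs == 3:
        return 2 ** disks - 1        # classical 3-peg Tower of Hanoi
    # OR choice over k: move k disks aside using all pegs, move the
    # remaining disks - k using one peg fewer, then the k disks back.
    return min(2 * min_moves(k, pegs) + min_moves(disks - k, pegs - 1)
               for k in range(1, disks))

best = min_moves(8, 5)               # optimal cost for 8 disks, 5 pegs
```

Each memoized entry corresponds to a node of the search space; the alternative (non-minimal) choices of k are exactly the alternative solutions that ASG and LASG enumerate in order of cost.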
Experimental results show that the performance of ASG is similar to that of LASG with respect to both space and time. However, both ASG and LASG outperform BU with respect to both time and space requirements.

Randomly Constructed AND/OR DAGs
We have constructed a set of randomly generated AND/OR DAGs and evaluated the ASG, LASG, and BU algorithms for generating solutions under the default semantics. We have used the proposed extension to the BU algorithm for generating solutions under the default semantics. Table 12 and Table 13 compare the time and space required for running ASG, LASG, and BU for generating 100, 300, and 500 solutions for each test case. The first and second columns of every row provide the size (n αβ ) and the average out-degree (d) of the DAG. The results obtained for this test domain are similar to the results for randomly constructed AND/OR trees. It may be noted that in terms of both time and space required, LASG outperforms both ASG and BU. Between ASG and BU, for most of the test cases BU performs better than ASG with respect to the time required for generating a specific number of solutions, whereas the space requirements of ASG and BU have an interesting correlation with the average out-degree (d) and the size (n αβ ) of the DAG. For low values of the d and n αβ parameters, e.g., (n αβ , d) combinations like (60, 2), (33, 3), etc., BU performs better than ASG. On the contrary, for the other combinations, where at least one of the n αβ and d parameters has a high value, e.g., (n αβ , d) combinations like (920, 2), (9624, 3), (40884, 4), etc., ASG outperforms BU.

Matrix-Chain Multiplication Problem
We have also used the well-known matrix-chain multiplication problem (Cormen, Stein, Rivest, & Leiserson, 2001) for experimentation. The search space of the popular dynamic programming formulation of this problem corresponds to an AND/OR DAG. Given a sequence A 1 , A 2 , · · · , A n of n matrices, where matrix A i has dimension p i−1 × p i , the objective is to find the most efficient way to multiply these matrices. The classical dynamic programming approach works as follows. Suppose A [i,j] denotes the matrix that results from evaluating the product A i A i+1 · · · A j , and m[i, j] is the minimum number of scalar multiplications required for computing the matrix A [i,j] . Therefore, the cost of the optimal solution for A [i,j] is m[i, j], which can be recursively defined as:

m[i, j] = 0, if i = j;
m[i, j] = min { m[i, k] + m[k + 1, j] + p i−1 p k p j : i ≤ k < j }, if i < j.

The choice of the value of k is modeled as an OR node, and for every such choice the problem is divided into two sub-problems. This decomposition into sub-problems is modeled as an AND node. It is worth noting that unlike the search space of the 5-peg ToH problem, the search space of the matrix-chain multiplication problem corresponds to an AND/OR DAG. We have used the search space for different matrix sequences of varying length and generated alternative solutions in non-decreasing order of cost. In Table 14 we report the time required, and in Table 15 we report the memory used, for generating 10, 15, and 20 solutions for each test case.
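As an illustration of the recurrence, a bottom-up tabulation can be sketched as follows. This is a standard textbook implementation computing only the optimal cost, not the ordered alternatives; `matrix_chain_cost` is a hypothetical name:

```python
def matrix_chain_cost(p):
    """Minimum scalar multiplications for A1..An, where Ai is p[i-1] x p[i].

    m[i][j] mirrors the recurrence: m[i][j] = 0 if i == j, else the
    minimum over i <= k < j of m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j].
    """
    n = len(p) - 1  # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # OR choice over split point k; each k joins two sub-problems.
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain_cost([30, 35, 15, 5, 10, 20, 25]))
```

Each table cell m[i][j] plays the role of an OR node over the split points k, and each k corresponds to an AND node joining the sub-problems A [i,k] and A [k+1,j].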
In Table 14, for each test case, we also report the time required for constructing the explicit AND/OR DAG from the recursive formulation in the 2nd column, and the optimal solution construction time in the 3rd column. It is interesting to observe that the relative performance of ASG and LASG for this search space is very similar to that obtained for the 5-peg ToH search space, even though the search space of this domain is an AND/OR DAG. Both ASG and LASG perform approximately the same with respect to time and space requirements. However, the advantage of ASG as well as LASG over BU with respect to both time and space requirements is more significant in this domain.

Generating Secondary Structure for RNA
Another relevant problem where alternative solutions play an important role is the computation of the secondary structure of RNA. RNA molecules can be viewed as strings of bases, where each base belongs to the set {Adenine, Cytosine, Guanine, Uracil} (also denoted as {A, C, G, U}). RNA molecules tend to loop back and form base pairs with themselves, and the resulting shape is called the secondary structure (Mathews & Zuker, 2004). The stability of the secondary structure largely depends on the number of base pairings (in general, a larger number of base pairings implies a more stable secondary structure). Although there are other factors that influence the secondary structure, it is often not possible to express these factors using a cost function, and they are typically evaluated empirically. Therefore, it is useful to generate a set of possible alternative secondary structures, ordered by decreasing number of base pairings, for a given RNA, which can then be subjected to experimental evaluation.
The computation of the optimal secondary structure under the principle of maximizing the number of base pairings has a nice dynamic programming formulation (Kleinberg & Tardos, 2005). Given an RNA molecule B = b 1 b 2 · · · b n where each b i ∈ {A, C, G, U}, a secondary structure on B is a set of base pairings, D = {(i, j)}, where i, j ∈ {1, 2, · · · , n}, that satisfies the following conditions:
(i) (No sharp turns.) If (i, j) ∈ D, then i < j − 4.
(ii) The elements of any pair in D consist of either {A, U} or {C, G}.
(iii) D is a matching: no base appears in more than one pair.
(iv) (Non-crossing.) If (i, j) and (k, l) are both in D, then it cannot be the case that i < k < j < l.
Under the above mentioned conditions, the dynamic programming formulation is as follows. Suppose P(i, j) denotes the maximum number of base pairings in a secondary structure on b i · · · b j . P(i, j) can be recursively defined as:

P(i, j) = 0, if i ≥ j − 4;
P(i, j) = max( P(i, j − 1), max { 1 + P(i, t − 1) + P(t + 1, j − 1) : i ≤ t < j − 4 and b t , b j form a valid pair } ), otherwise.

Here, a choice of the value of t is modeled as an OR node, and for every such choice, the problem is divided into two sub-problems. This decomposition into sub-problems is modeled as an AND node. We have experimented with the search space of this problem for the set of RNA molecule sequences obtained from the test cases developed by Szymanski, Barciszewska, Barciszewski, and Erdmann (2005). The details of the test cases are shown in Table 16.
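The recurrence for P(i, j) can be sketched as a standard Nussinov-style dynamic program under the usual Kleinberg-Tardos conditions (no sharp turns, {A,U}/{C,G} pairs, non-crossing). This minimal illustration computes only the optimal pairing count; `max_base_pairs` is a hypothetical helper name:

```python
def max_base_pairs(b):
    """Maximum number of base pairings P(0, n-1) for an RNA string b,
    using 0-based indices and the no-sharp-turns rule t < j - 4."""
    pairable = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}
    n = len(b)
    if n == 0:
        return 0
    P = [[0] * n for _ in range(n)]
    for length in range(6, n + 1):      # intervals with j - i >= 5
        for i in range(0, n - length + 1):
            j = i + length - 1
            best = P[i][j - 1]          # option 1: leave b[j] unpaired
            for t in range(i, j - 4):   # option 2: pair (t, j)
                if (b[t], b[j]) in pairable:
                    left = P[i][t - 1] if t > i else 0
                    best = max(best, 1 + left + P[t + 1][j - 1])
            P[i][j] = best
    return P[0][n - 1]
```

Each candidate pairing position t is one OR branch, and the two recursive terms P(i, t − 1) and P(t + 1, j − 1) are the sub-problems joined by an AND node.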
For each test case, we report the time required in Table 17 for generating 5, 10, and 15 solutions. For the same setting, the space required is reported in Table 18. In Table 17, for each test case, we also report the time required for constructing the explicit AND/OR DAG from the recursive formulation in the 2nd column, and the time required for constructing the optimal solution in the 3rd column. We use a high time-out value (1800 seconds) in order to gather the running times required by BU. We limit the maximum number of solutions generated to 15 because, for generating a higher number of solutions, BU times out for most of the test cases. It is worth noting that the results obtained for this domain are very similar to the results obtained for the matrix-chain multiplication problem domain. ASG and LASG perform similarly with respect to both space and time, and they outperform BU significantly with respect to both time and space requirements.

Observations
The experimental data show that the LASG algorithm generally outperforms the ASG algorithm and the existing bottom-up approach in terms of running time for complete alternating AND/OR trees and AND/OR DAGs. In contrast, for the other problem domains, i.e., the 5-peg Tower of Hanoi problem, the matrix-chain multiplication problem, and the problem of determining the secondary structure of RNA sequences, the overall performance of the ASG algorithm is similar to that of the LASG algorithm. This behavior can be explained by the average and maximum length statistics of the Open list, reported in Table 19 - Table 23, for these test domains. Since the ASG algorithm checks for the presence of duplicates while expanding a solution, and the Open list grows large for complete alternating trees and random DAGs, the time required for duplicate checking grows rapidly in these domains. Hence, the overall time required for generating a specific number of solutions also increases rapidly (faster than for both BU and LASG) with the increase in the size of the tree/DAG. As a result, BU outperforms ASG with respect to time for trees and DAGs. However, the memory used for generating a specific number of solutions increases moderately (more slowly than for BU) with the increase in the size of the tree/DAG. Therefore, with respect to space, ASG outperforms BU for larger trees and DAGs. Between LASG and BU, both the time and the memory requirement of BU increase faster than those of LASG as the degree of the AND/OR tree or DAG increases. This happens because, for BU, the time taken for merging the sub-solutions at an AND node, and the memory required for storing the alternative solutions rooted at different nodes, increase rapidly with the degree of that node.
In contrast, for the other test domains, the 5-peg Tower of Hanoi problem, the matrix-chain multiplication problem, and the problem of finding the secondary structure of RNA sequences, the average and maximum sizes of Open for both ASG and LASG are comparable (Table 21, Table 22 and Table 23). Therefore, for the LASG algorithm, the time saved by avoiding the duplicate checking is offset by the extra overhead of maintaining the solution space tree and the checks required for lazy expansion. Hence the running time as well as the space requirement are almost the same for both algorithms in these three problem domains.
Moreover, due to the low values of the average and maximum size of Open, ASG outperforms BU with respect to both time and memory in these three test domains. In these three domains as well, between LASG and BU, both the time and the memory requirement of BU increase faster than those of LASG as the size of the search space (AND/OR tree or DAG) increases.
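The best-first pattern discussed in these observations can be sketched generically. The following is a simplified illustration of an Open/Closed enumeration with duplicate suppression (a `seen` set standing in for ASG's TList check), not the paper's exact algorithm; `k_best`, `successors`, and the toy solution encoding are hypothetical:

```python
import heapq

def k_best(start, cost, successors, k):
    """Enumerate up to k solutions in non-decreasing order of cost.

    Open is a priority queue of candidates; each popped (Closed) solution
    is expanded via `successors`, and the `seen` set suppresses duplicate
    insertions into Open.
    """
    open_heap = [(cost(start), start)]
    seen = {start}
    closed = []
    while open_heap and len(closed) < k:
        c, sol = heapq.heappop(open_heap)
        closed.append((c, sol))
        for succ in successors(sol):
            if succ not in seen:        # duplicate check
                seen.add(succ)
                heapq.heappush(open_heap, (cost(succ), succ))
    return closed

# Toy example: enumerate subsets of weights in non-decreasing total cost.
weights = (3, 1, 4)
cost = lambda s: sum(weights[i] for i in s)
succ = lambda s: [tuple(sorted(s + (i,))) for i in range(len(weights))
                  if i not in s]
print(k_best((), cost, succ, 4))
```

The `seen` set here grows with the number of generated candidates, which mirrors why duplicate checking becomes expensive when the Open list is long; LASG's lazy expansion avoids this cost by never generating the duplicates in the first place.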

Ramifications on Implicitly Specified AND/OR Structures
In this section, we briefly discuss the use of our proposed algorithms for generating alternative solutions in non-decreasing order of cost for implicit AND/OR search spaces. One possible way is to extend the standard AO* to generate a given number of solutions, say k, as follows. Instead of keeping only one potential solution graph (psg), at any stage k psgs can be computed on the explicitly constructed search space, and instead of expanding one node, k nodes (one node from each psg) can be expanded at once. After expanding the nodes, the k psgs are recomputed. Since the costs of the nodes are often recomputed after expanding nodes, the swap options associated with any such node have to be updated after every such recomputation.
Another possible approach is to run AO* until it generates the optimal solution. At this point, the swap options can be computed on the explicit portion of the graph, and the swap option with minimum cost can be applied to the optimal solution. The resulting psg is then expanded further, growing the explicit graph. The swap options are re-evaluated to incorporate the cost updates, and the next best psg is computed. This process continues until the second best solution is derived. Then, among the remaining successor psgs of the first solution and the successor psgs of the second solution, the most promising psg is selected and expanded. This process continues until the third solution is found, whose successor psgs are in turn added to the existing pool of candidate psgs. These two broad steps, (a) selecting the next best psg from the pool of candidate psgs, and then (b) expanding the explicit graph until the next best solution is found, are repeated until k solutions are found.
             5 solutions            10 solutions            15 solutions
           ASG       LASG         ASG        LASG         ASG        LASG
         avg  max   avg  max    avg   max   avg  max    avg   max   avg  max
TC1       45   84    41   74     93   176    75  125    135   249    95  143
TC2       50   95    50   95    100   192    94  170    146   266   125  197
TC3       47   90    46   89     90   168    82  142    132   244   115  210
TC4       50   93    49   90    101   194    87  155    152   292   119  197
TC5       47   86    45   74     98   186    87  149    140   246   114  184
TC6       49   93    47   84    105   200    95  168    155   294   127  206
TC7       42   81    42   80     83   157    73  119    121   231    92  138
TC8       46   89    44   84     97   188    86  159    144   277   120  214
TC9       40   77    39   73     80   147    70  119    115   214    93  146
TC10      59  116    59  113    128   251   116  212    189   350   161  280
TC11      55  106    54  105    115   225   110  211    171   317   166  321
TC12      33   64    31   51     67   116    55   98     95   172    78  135
TC13      51   98    51   97    103   193   100  185    149   276   140  239
TC14      41   78    40   73     82   154    69  112    120   231    97  176

Table 23: Average and maximum length of Open while generating 5, 10, and 15 solutions for generating secondary structure of RNA sequences

It is important to observe that both methods heavily depend on incorporating updates to the explicit DAG, such as adding nodes and increases in cost, and on recomputing the associated swap options along with the signatures that use those swap options. Handling dynamic updates to the DAG efficiently, and its use in implicit AND/OR search spaces, remains an interesting future direction.

Conclusion
In our work we have presented top-down algorithms for generating the solutions of a given weighted AND/OR structure (DAG) in non-decreasing order of cost. Ordered solutions for AND/OR DAGs are useful in a number of areas, including model based programming, developing new variants of AO*, service composition based on user preferences, real life problems having dynamic programming formulations, etc. Our proposed algorithms have two advantages: (a) they work incrementally, i.e., after generating a specific number of solutions, the next solution is generated quickly; (b) if the number of solutions to be generated is known a priori, our algorithms can leverage that to generate solutions faster. Experimental results show the efficacy of our algorithms over the state-of-the-art. This work also opens up several interesting research problems and applications.

For every τ i , we construct τ̂ i = σ i1,i2 , . . . , σ ij−1,ij , where σ ik,ik+1 = ⟨e ik , e ik+1 , δ ik,ik+1 ⟩.

[Basis (n = 1):] Consider the swap list of S opt . The solutions whose default path length is equal to 1 form Succ(S opt ). Therefore these solutions are present in G.

[Inductive Step:] Suppose the solutions whose default path length is less than or equal to n are present in G. We prove that the solutions having default path length equal to n + 1 are also present in G. Consider any solution S m where P d (S m ) = n + 1. Let Σ̂(S m ) = σ 1 , · · · , σ n , σ n+1 , and consider the solution S ′ m where Σ̂(S ′ m ) = σ 1 , · · · , σ n . Since P d (S ′ m ) = n, S ′ m ∈ V, and swap option σ n+1 ∈ L(S ′ m ), there is a directed edge from S ′ m to S m in G s . Hence every solution having a default path length equal to n + 1 is also present in G.
⊓⊔

Consider the sequence of swap options σ 1 , . . . , σ k corresponding to S m . Also consider the solution S ′ m whose sequence of swap options is Σ̂ ′ = σ 1 , . . . , σ k−1 . According to Property 3.2, C(S ′ m ) ≤ C(S m ). Consider the following two cases:
a. C(S ′ m ) < C(S m ): Since S m is the first instance of the incorrect scenario, and Algorithm 4 generates the solutions in non-decreasing order of cost, S ′ m is generated prior to S m .
b. C(S ′ m ) = C(S m ): Since Algorithm 4 resolves the tie in favor of the parent solution, and S m is the first instance of the incorrect scenario, in this case also S ′ m is generated prior to S m .
The swap option σ k belongs to the swap list of S ′ m . When S ′ m was generated by Algorithm 4, that is, when S ′ m was added to Closed, S ′ m was also expanded and the solutions which can be constructed from S ′ m applying one swap option, were added to the Open list. Since S m was constructed from S ′ m applying one swap option σ k , S m was also added to the Open while exploring the successors of S ′ m . Therefore S m will also be eventually generated by Algorithm 4 -a contradiction.
⊓⊔

Lemma A.5 For any alternating AND/OR tree T̂ αβ , Algorithm 4 does not add any solution to Closed (at Line 11 of Algorithm 4) more than once.
Proof: [Lemma A.5] For the purpose of contradiction, let us assume that S m is the first solution that is added to Closed twice. Therefore S m must have been added to Open twice. Consider the following facts.
a. When S m was added to Closed for the first time, the value of lastSolCost was C(S m ), and S m was added to TList.
b. From the description of Algorithm 4 it follows that the contents of TList are deleted only when the value of lastSolCost increases.
c. From Lemma A.3 it follows that Algorithm 4 generates the solutions in non-decreasing order of cost. Hence, when S m was generated for the second time, the value of lastSolCost did not change from C(S m ).
From the above facts it follows that S m was present in TList when S m was added to Open for the second time. Since, while adding a solution to Open, Algorithm 4 checks whether it is present in TList (at Line 16 of Algorithm 4), Algorithm 4 must have done the same while adding S m to Open for the second time. Therefore S m could not have been added to Open a second time, a contradiction. ⊓⊔

The edges in the paths represent the application of a swap option to a solution. Now p 1 and p 2 start from the same solution and also end at the same solution. Therefore the sets of swap options that are used in these paths are also the same. Hence the lengths of those paths are equal, that is, in the context of p 1 and p 2 , n = m.

[Case 1 (n = 2):] Consider the following two paths: It is obvious that σ 1 = σ ′ 2 and σ 2 = σ ′ 1 . Suppose S 2 ≺ t S ′ 2 . Here Algorithm 5 does not apply the swap option σ 1 to S ′ 2 . Therefore p 2 is not generated by Algorithm 5.
[Case 2 (any other value of n):] In this case, any path belonging to the set of reconvergent paths consists of n different swap options, say σ 1 , · · · , σ n . Also, the start node and the end node of the paths under consideration are S p and S m . Consider the nodes in the paths at length 1 from S p . Clearly there can be n such nodes. Among those nodes, suppose Algorithm 5 adds S p1 to Closed first, and S p1 is constructed from S p by applying swap option σ 1 . According to Algorithm 5, σ 1 will not be applied to any other node that is constructed from S p and is added to Closed after S p1 . Therefore, all those paths starting from S p whose second node is not S p1 will not be generated by Algorithm 5. We can apply a similar argument to the paths from S p1 to S m of length n − 1 to determine the paths which will not be generated by Algorithm 5. At each stage, a set of paths will not be grown further, and at most one path towards S m will continue to grow. After applying the previous argument n times, at most one path from S p to S m will be constructed. Therefore Algorithm 5 will generate at most one path from S p to S m .
⊓⊔

Definition B.s [Connection Relation R c and R̂ c ] We define the connection relation, R c , a symmetric relation for a pair of OR nodes, v q and v r , belonging to an alternating AND/OR tree T̂ αβ as: (v q , v r ) ∈ R c if in T̂ αβ there exists an AND node v p from which there exist two paths, (i) p 1 = v p → . . . → v q , and (ii) p 2 = v p → . . . → v r . Similarly the connection relation R̂ c is defined between two swap options as follows. Consider two swap options σ iq and σ jr , where σ iq = ⟨e i , e q , δ iq ⟩ and σ jr = ⟨e j , e r , δ jr ⟩. Suppose OR edges e i and e q emanate from v p , and OR edges e j and e r emanate from v t . Then (σ iq , σ jr ) ∈ R̂ c if (v p , v t ) ∈ R c . Now consider the set of OR nodes V m = {v 1 , · · · , v k }, where swap option σ j belongs to v j and 1 ≤ j ≤ k. The set of swap options V̂ m = {σ 1 , · · · , σ k } is said to be mutually connected if every pair of swap options in V̂ m is related by R̂ c .
Lemma B.4 Suppose S m is a solution of an alternating AND/OR tree T̂ αβ , P red(S m ) = {S 1 , · · · , S k }, and swap option σ j is used to construct S m from S j , where 1 ≤ j ≤ k. Then the swap options σ 1 , · · · , σ k are mutually connected.
We have to show that (σ i 1 , σ i 2 ) ∈ R̂ c for every pair of swap options σ i 1 and σ i 2 . For the purpose of proof by contradiction, let us assume (σ i 1 , σ i 2 ) ∉ R̂ c . Also, S m is constructed by applying σ i 1 and σ i 2 to S i 1 and S i 2 respectively. Consider the path p 1 in the SSDAG of T̂ αβ which starts from S opt and ends at S m , along which S i 1 is the parent of S m . Along this path, σ i 2 is applied before the application of the swap option σ i 1 . Similarly, consider the path p 2 in the SSDAG of T̂ αβ which starts from S opt and ends at S m , along which S i 2 is the parent of S m . Along this path, σ i 1 is applied before the application of the swap option σ i 2 .
Suppose σ i 1 and σ i 2 belong to OR nodes v 1 and v 2 respectively. Since along path p 1 , σ i 1 is the swap option which is applied last, S m contains node v 1 . Similarly, along path p 2 , σ i 2 is the swap option which is applied last; hence S m contains node v 2 . Therefore, there must be an AND node v r in T̂ αβ from which there exist paths to nodes v 1 and v 2 , which implies that (σ i 1 , σ i 2 ) ∈ R̂ c . We arrive at a contradiction, which proves that σ 1 , · · · , σ k are mutually connected.
⊓⊔

Definition B.u [Subgraph of SSDAG] Consider a solution S p of an alternating AND/OR tree T̂ αβ and a mutually connected set V m of OR nodes in S p , where ∀v q ∈ V m , C(S p , v q ) = C opt (v q ). The subgraph G s sub (S p , V m ) = ⟨V sub , E sub ⟩ of the SSDAG with respect to S p and V m is defined as follows. V sub consists of only those solutions which can be constructed from S p by applying a sequence of swap options belonging to the nodes in V m , and E sub is the set of edges corresponding to the swap options that belong to the nodes in V m .
Lemma B.5 The total number of distinct solutions at each level d in G s sub (S p , V m ) is C(n + d − 2, n − 1), where |V m | = n.
Proof: [Lemma B.5] Consider the swap options that belong to the nodes in V m . With respect to these swap options, every solution S r in G s sub (S p , V m ) is represented by a sequence of numbers of length n, Seq(S r ), where every number corresponds to a distinct node in V m . The numerical value of a number represents the rank of the swap option that is chosen for the corresponding node v q ∈ V m . According to this representation: (i) the sum of the numbers in Seq(S r ) of a solution S r is equal to the sum of the numbers in Seq(S ′ r ) of any other solution S ′ r at the same level; (ii) the sum of the numbers in Seq(S r ) of a solution S r is one more than the sum of the numbers in Seq(S ′′ r ) of any solution S ′′ r of the previous level.
Hence, at the d th level, there are n slots and d − 1 increments that need to be made to Seq(S r ). This is an instance of the well-known combinatorial problem of distributing n + d − 1 objects into n slots with the restriction of keeping at least one object per slot, which can be done in C(n + d − 2, n − 1) ways. ⊓⊔

Theorem B.1 The solution space tree constructed by Algorithm 5 is complete.
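The count in Lemma B.5 is the standard stars-and-bars number, and it can be checked numerically. The sketch below, with the hypothetical helper `level_count_direct`, enumerates the rank sequences directly and compares the count with the binomial coefficient:

```python
import math
from itertools import product

def level_count_direct(n, d):
    """Count length-n sequences of positive ranks whose entries sum to
    n + (d - 1), i.e. the solutions at level d in Lemma B.5's subgraph."""
    target = n + d - 1
    return sum(1 for seq in product(range(1, target + 1), repeat=n)
               if sum(seq) == target)

# Brute-force enumeration agrees with C(n+d-2, n-1) for small cases.
for n in (2, 3, 4):
    for d in (1, 2, 3, 4):
        assert level_count_direct(n, d) == math.comb(n + d - 2, n - 1)
print("Lemma B.5 count matches C(n+d-2, n-1)")
```

For instance, with n = 2 and d = 3 the sequences summing to 4 are (1, 3), (2, 2), and (3, 1), matching C(3, 1) = 3.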
Proof: [Theorem B.1] For the purpose of contradiction, suppose S m is the first solution which is not generated by Algorithm 5. Also, P red(S m ) = {S p i } and S m can be constructed from S p i by applying σ q i , where 1 ≤ i ≤ k. From Lemma B.4 it follows that the set of swap options {σ q i | 1 ≤ i ≤ k} is mutually connected. Therefore the set of OR nodes V m to which the swap options belong is also mutually connected. Suppose |V m | = n. Consider the solution S q , where V m is mutually connected, and for 1 ≤ i ≤ k, every σ q i belongs to the set of native swap options of S q with respect to the swap option that is used to construct S q . Clearly, ∀v t ∈ V m , C(S q , v t ) = C opt (v t ). We argue that S q is generated by Algorithm 5, because S m is the first solution which is not generated by Algorithm 5. Consider the subtree T s sub of T s rooted at S q , where only the edges corresponding to swap options that belong to the nodes in V m are considered. We now prove that the number of solutions generated by Algorithm 5 at every level of T s sub is equal to the number of solutions at the same level in G s sub (S q , V m ). Consider the solution S q and the set Succ(S q ). Suppose Succ(S q , V m ) is the set of successor solutions that are constructed from S q by applying the swap options belonging to the nodes in V m , and S ′ min is the minimum cost solution in Succ(S q , V m ). According to Algorithm 5, initially Succ(S ′ min ) is partially explored using the set of native swap options of S ′ min . Any other non-native swap option, σ b , that belongs to the nodes in V m , is used to explore Succ(S ′ min ) right after the sibling solution of S ′ min , constructed by applying σ b to S q , is added to Closed. Consider the fact that for solution S q , ∀v t ∈ V m , C(S q , v t ) = C opt (v t ) holds. Therefore all the swap options belonging to the nodes in V m will also eventually be used to explore the successors of S ′ min .
Similarly, the second best successor of S q will be able to use all but one swap option, σ c , which is used to construct S ′ min . The immediate children of S ′ min in T s sub will consist of all solutions that can be obtained by the application of one swap option in V m to S ′ min . The native swap list of S ′ min contains the swap option ranking next to σ c . The swap options that are used to construct the other n − 1 sibling solutions of S ′ min will be used again during lazy expansion, which accounts for another n − 1 children of S ′ min . Hence there will be n children of S ′ min . Similarly, the second best successor of S q in T s sub will have n − 1 immediate children, the third best successor of S q in T s sub will have n − 2 children, and so on. The children of these solutions will again have children of their own, increasing the number of solutions at each level of the tree. This way, with each increasing level, the number of solutions present in the level keeps increasing. We prove the following proposition as a part of proving Theorem B.1.
Since Algorithm 5 does not generate duplicate nodes, and since from Proposition B.1 the number of solutions in G s sub (S q , V m ) at any level is equal to the number of solutions at that level of T s sub , at any level the set of solutions in G s sub (S q , V m ) is also generated by Algorithm 5 through T s sub . Therefore, the level at which S m belongs in G s sub (S q , V m ) will also be generated by Algorithm 5. Hence S m will also be generated by Algorithm 5, a contradiction, which establishes the statement of Theorem B.1.
⊓⊔

Function Convert takes the root node of an AND/OR tree and transforms it into an equivalent alternating AND/OR tree recursively. The overall process of generating alternative solutions of an AND/OR tree is as follows. The AND/OR tree is converted to an alternating AND/OR tree using the Convert function, the solutions are generated using the ASG algorithm, and the solutions are then transformed back using the Revert function. The proof of correctness is presented below.

C.1 Proof of Correctness
Suppose in an AND/OR tree T αβ two nodes, v q and v r , are of the same type (AND/AND or OR/OR) and they are connected by an edge e r . Edges e 1 , · · · , e k emanate from v r . Now the fold operation is applied to v q and v r . Let T 1 αβ be the AND/OR tree which is generated by the application of the fold operation.
Lemma C.1 In the context mentioned above, we present the claim in the following two propositions.
Proposition C.1 The set of solutions of T αβ having node v q can be generated from the set of solutions of T 1 αβ having node v q by applying the unfold operation to v q in the solutions of T 1 αβ .
Proposition C.2 For every solution S 1 m of T 1 αβ that contains node v q , there exists a solution S m of T αβ that can be generated from S 1 m by applying unfold to v q .
Proof: [Proposition C.1] We present the proof for the following cases. Consider any solution S m of T αβ that contains node v q .
subtree rooted at v i is not modified by the sequence of operations: (a) the folding of v r to construct T 1 αβ from T αβ , and (b) the unfolding of v q to construct S m from S 1 m .
b. v q and v r are AND nodes: Since v q is an AND node, S 1 m will contain all of the AND edges that emanate from v q . There are two types of AND edges emanating from v q in T 1 αβ : (a) Type-1, the edges from v q that are also present in T αβ ; and (b) Type-2, the edges that were added to v q by folding, which emanate from v r in T αβ . Apply the unfold operation to the node v q in S 1 m and generate solution S m . S m will contain the Type-1 edges, and another edge e r from v q . In S m , v q and v r are connected by e r , and the Type-2 edges originate from v r . We argue that S m is a valid solution of T αβ , since the subtrees rooted at the nodes pointed to by the Type-2 edges are not modified by the sequence of operations: (a) the folding of v r to construct T 1 αβ from T αβ , and (b) the unfolding of v q to construct S m from S 1 m .
Clearly, any solution S 1 ′ m of T 1 αβ that does not contain node v q is a valid solution of T αβ as well.

Lemma C.2 If function Convert is applied to the root node of any AND/OR tree T αβ , an alternating AND/OR tree T̂ αβ is generated.
Proof: [Lemma C.2] Function Convert traverses every intermediate node in a depth-first manner. Consider any sequence of nodes v q 1 , v q 2 , · · · , v qn of the same type, where v q i is the parent of v q i+1 in T αβ and 1 ≤ i < n. Obviously, the fold operation is applied to v q i+1 before v q i , where 1 ≤ i < n. In other words, the fold operation is applied to the sequence of nodes in reverse order, and after folding v q i+1 , all the edges of v q i+1 are modified and moved to v q i , where 1 ≤ i < n. When the function call Convert(v q 2 ) returns, all the edges of v q 2 , · · · , v qn have already been moved to v q 1 and the sequence of nodes v q 1 , v q 2 , · · · , v qn is flattened. Therefore, every sequence of nodes of the same type is flattened when the function call Convert(v R ) returns, where v R is the root of T αβ , and an alternating AND/OR tree T̂ αβ is generated.

Lemma C.3 If function Revert is applied to an alternating AND/OR tree T̂ αβ , the updatelist of every edge in T̂ αβ becomes empty.
Proof: [Lemma C.3] Follows from the description of Revert.
Theorem C.1 For any AND/OR tree T αβ , it is possible to construct an alternating AND/OR tree T̂ αβ using function Convert, such that the set of all possible solutions of T αβ is generated in non-decreasing order of cost by applying Algorithm 4 to T̂ αβ and then converting individual solutions using function Revert.
Proof: [Theorem C.1] According to Lemma C.2, after the application of function Convert to T αβ , an alternating AND/OR tree T̂ αβ is generated. Consider the intermediate AND/OR trees that are generated after folding every node in T αβ . Let T 0 αβ , T 1 αβ , · · · , T n αβ be this sequence of AND/OR trees, where T 0 αβ = T αβ and T̂ αβ = T n αβ . Since T i+1 αβ is generated from T i αβ by folding exactly one node, where 0 ≤ i < n, according to Lemma C.1, the solutions of T i αβ can be generated from the solutions of T i+1 αβ by unfolding the same node. According to Lemma C.3, for any solution of T̂ αβ , Revert unfolds every node v q in that solution, where v q was folded by Convert while transforming T αβ to T̂ αβ . Therefore the solutions of T αβ can be generated from the solutions of T̂ αβ .