Avoiding and Escaping Depressions in Real-Time Heuristic Search

Heuristics used for solving hard real-time search problems have regions with depressions. Such regions are bounded areas of the search space in which the heuristic function is inaccurate compared to the actual cost to reach a solution. Early real-time search algorithms, like LRTA*, easily become trapped in those regions, since the heuristic values of their states may need to be updated multiple times, which results in costly solutions. State-of-the-art real-time search algorithms, like LSS-LRTA* or LRTA*(k), improve LRTA*'s mechanism for updating the heuristic, resulting in improved performance. Those algorithms, however, do not guide search towards avoiding depressed regions. This paper presents depression avoidance, a simple real-time search principle that guides search away from states that have been marked as part of a heuristic depression. We propose two ways in which depression avoidance can be implemented: mark-and-avoid and move-to-border. We implement these strategies on top of LSS-LRTA* and RTAA*, producing four new real-time heuristic search algorithms: aLSS-LRTA*, daLSS-LRTA*, aRTAA*, and daRTAA*. When the objective is to find a single solution by running the real-time search algorithm once, we show that daLSS-LRTA* and daRTAA* outperform their predecessors, sometimes by an order of magnitude. Of the four new algorithms, daRTAA* produces the best solutions given a fixed deadline on the average time allowed per planning episode. We prove that all our algorithms have good theoretical properties: in finite search spaces, they find a solution if one exists, and they converge to an optimal solution after a sufficiently large number of trials.


Introduction
Many real-world applications require agents to act quickly in a possibly unknown environment. Such is the case, for example, of autonomous robots or vehicles moving quickly through initially unknown terrain (Koenig, 2001). It is also the case of virtual agents in games (e.g., Warcraft, Starcraft), in which the time the game software can dedicate to tasks such as path-finding for all virtual agents is very limited. Indeed, companies impose limits on the order of 1 millisecond to perform these tasks (Bulitko, Björnsson, Sturtevant, & Lawrence, 2011). Therefore, there is usually no time to plan full trajectories in advance; rather, path-finding has to be carried out in a real-time fashion.
Real-time search (e.g., Korf, 1990; Weiss, 1999; Edelkamp & Schrödl, 2011) is a standard paradigm for solving search problems in which the environment is not fully known in advance
and agents have to act quickly. Instead of running a computationally expensive procedure to generate a conditional plan at the outset, real-time algorithms interleave planning and execution. As such, they usually run a computationally inexpensive lookahead-update-act cycle, in which search is carried out to select the next move (lookahead phase), then learning is carried out (update phase), and finally an action is executed, which may involve observing the environment (act phase). Like standard A* search (Hart, Nilsson, & Raphael, 1968), they use a heuristic function to guide action selection. As the environment is unveiled, the algorithm updates its internal belief about the structure of the search space, updating (i.e., learning) the heuristic value for some states. The lookahead-update-act cycle is executed until a solution is found.
Early real-time heuristic algorithms like Learning Real-Time A* (LRTA*) and Real-Time A* (RTA*) (Korf, 1990) are amenable to settings in which the environment is initially unknown. These algorithms, however, perform poorly in the presence of heuristic depressions (Ishida, 1992). Intuitively, a heuristic depression is a bounded region of the search space in which the heuristic is inaccurate with respect to the heuristic values of the states on the border of the region. When an agent controlled by LRTA* or RTA* enters a region of the search space that forms a heuristic depression, it will usually become "trapped". In order to leave the heuristically depressed region, the agent will need to visit and update many states in this region, potentially several times. Furthermore, in many applications, such as games, the behavior of the agent in a depression may look irrational and is thus undesirable.
State-of-the-art real-time heuristic search algorithms that are suitable for applications with initially unknown environments are capable of escaping heuristic depressions more quickly than LRTA* or RTA*. They do so by performing more lookahead search, more learning, or a combination of both. More search involves selecting an action by looking farther ahead in the search space. More learning usually involves updating the heuristic of several states in a single iteration. Many algorithms use one or a combination of these techniques (e.g., Hernández & Meseguer, 2005; Bulitko & Lee, 2006; Koenig & Likhachev, 2006b; Hernández & Meseguer, 2007; Rayner, Davison, Bulitko, Anderson, & Lu, 2007; Björnsson, Bulitko, & Sturtevant, 2009; Koenig & Sun, 2009). As a result, these algorithms perform better than LRTA*, spending fewer moves trapped in depressions.
Two algorithms representative of the state of the art in real-time search for initially unknown environments are LSS-LRTA* (Koenig & Sun, 2009) and RTAA* (Koenig & Likhachev, 2006b). These algorithms generalize LRTA* by performing more search and more learning in each episode. Both have been shown to perform very well in practice. However, despite the use of more elaborate techniques, they may still perform poorly in the presence of heuristic depressions. This is because they may sometimes rely on increasing the heuristic value of states inside a depression as the mechanism to exit it.
In this paper we study techniques that improve the performance of real-time search algorithms by making them explicitly aware of heuristic depressions, and then by guiding the search so as to avoid and, therefore, escape depressions. Specifically, the contributions of this paper are as follows.
• We provide new empirical evidence showing that RTAA* outperforms LSS-LRTA* on game map benchmarks in the first trial, which means that whenever there is a single chance to run one of those real-time heuristic search algorithms to solve a search problem, RTAA* finds better solutions than LSS-LRTA* while making the same search effort. Previously, Koenig and Likhachev (2006b) had shown similar performance results, but in mazes. This is important since LSS-LRTA*, and not RTAA*, is the algorithm that has received more attention from the real-time heuristic search community. In this paper we consider incorporating our techniques into both LSS-LRTA* and RTAA*.
• We propose a definition of cost-sensitive heuristic depressions, a notion more general than Ishida's (1992) heuristic depressions, since it incorporates action costs. We illustrate that our depressions better describe the regions of the search space in which real-time search algorithms get trapped.
• We propose a simple principle to actively guide search towards avoiding cost-sensitive heuristic depressions that we call depression avoidance, together with two strategies to implement depression avoidance which can be incorporated into state-of-the-art real-time heuristic search algorithms: mark-and-avoid and move-to-border.
• We propose four new real-time search algorithms: two based on mark-and-avoid, aLSS-LRTA* and aRTAA*, and two based on move-to-border, daLSS-LRTA* and daRTAA*. The algorithms are the result of implementing depression avoidance on top of LSS-LRTA* and RTAA*.
• We prove that all our algorithms have desirable properties: heuristic consistency is preserved, they terminate if a solution exists, and they eventually converge to an optimal solution after running a sufficiently large, finite number of trials.
• We carry out an extensive empirical evaluation of our algorithms over deployed game benchmarks and mazes. Our evaluation shows that our algorithms outperform existing algorithms in both game maps and mazes. When little time is allowed for the lookahead phase, two of our algorithms, daLSS-LRTA* and daRTAA*, outperform existing ones by an order of magnitude.
Some of the contributions of this paper have been published in conference papers (Hernández & Baier, 2011d, 2011c). This article includes new material that has not been presented before. In particular:
• We describe and evaluate daLSS-LRTA*, an algorithm that is presented in this article for the first time.
• We include full proofs for the termination results (Theorem 6), and a new theoretical result (Theorem 7) on the convergence of all our algorithms.
• We extend previously published empirical results by including maze benchmarks, which had not been previously considered, and by including more game domains and problems.
• Finally, we discuss in detail some scenarios in which our techniques may not perform particularly well.
The rest of the paper is organized as follows. In Section 2 we explain basic concepts of real-time search. We then present LSS-LRTA* and RTAA*, and extend the results available in the literature by comparing them over game maps. Next, we elaborate on the concept of heuristic depression. We then describe our strategies for implementing depression avoidance and the algorithms that result from applying each of them to LSS-LRTA* and RTAA*. We continue with a detailed theoretical and experimental analysis. Then, we present a discussion of our approach and evaluation. We finish with a summary.

Preliminaries
A search problem P is a tuple (S, A, c, s_0, G), where (S, A) is a digraph that represents the search space. The set S represents the states and the arcs in A represent all available actions. A does not contain elements of the form (x, x). In addition, the cost function c : A → R+ associates a cost to each of the available actions. Finally, s_0 ∈ S is the start state, and G ⊆ S is a set of goal states. In this paper we assume search spaces are undirected; i.e., whenever (u, v) is in A, then so is (v, u). Furthermore, c(u, v) = c(v, u) for all (u, v) ∈ A. The successors of a state u are defined by Succ(u) = {v | (u, v) ∈ A}. Two states are neighbors if they are successors of each other.
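As a concrete illustration, the tuple above can be rendered as a small data structure. The following sketch is ours, not the paper's; the class and method names are illustrative.

```python
# Illustrative sketch of a search problem P = (S, A, c, s0, G) for an
# undirected search space with symmetric costs.
class SearchProblem:
    def __init__(self, arcs, s0, goals):
        # arcs: {(u, v): cost}; we mirror each arc so that c(u, v) = c(v, u)
        self.cost = {}
        for (u, v), c in arcs.items():
            self.cost[(u, v)] = c
            self.cost[(v, u)] = c
        self.s0 = s0
        self.goals = set(goals)

    def succ(self, u):
        # Succ(u) = {v | (u, v) in A}
        return [v for (x, v) in self.cost if x == u]

# a three-state example: a -- b -- g
P = SearchProblem({('a', 'b'): 1, ('b', 'g'): 2}, 'a', ['g'])
```

Storing both arc directions makes the undirectedness and cost symmetry of the definition explicit in code.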
A heuristic function h : S → [0, ∞) associates to each state s an approximation h(s) of the cost of a path from s to a goal state. We denote by h*(s) the cost of an optimal path to reach a solution from s.
A heuristic h is consistent if and only if h(g) = 0 for all g ∈ G and h(s) ≤ c(s, s′) + h(s′) for all states s′ ∈ Succ(s). If h is consistent and C(s, s′) is the cost of any path between two states s and s′, then h(s) ≤ C(s, s′) + h(s′). Furthermore, if h is consistent, it is easy to prove that it is also admissible; i.e., h(s) underestimates h*(s). For more details on these definitions, we refer the reader to the book by Pearl (1984).
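The two consistency conditions can be checked mechanically on small instances. The sketch below is our own illustration, not part of the paper; the toy chain and its heuristic values are assumptions for the example.

```python
# Consistency test: h(g) = 0 for all goals, and
# h(s) <= c(s, s') + h(s') for every successor s' of every state s.
def is_consistent(h, cost, succ, goals):
    if any(h[g] != 0 for g in goals):
        return False
    return all(h[s] <= cost[(s, t)] + h[t] for s in h for t in succ(s))

# toy chain a -- b -- g with unit costs
cost = {('a', 'b'): 1, ('b', 'a'): 1, ('b', 'g'): 1, ('g', 'b'): 1}
succ = lambda s: [v for (u, v) in cost if u == s]
h_ok  = {'a': 2, 'b': 1, 'g': 0}   # consistent
h_bad = {'a': 3, 'b': 1, 'g': 0}   # violates h(a) <= c(a, b) + h(b) = 2
```

Note that `h_ok` is also admissible (each value underestimates the true cost to `g`), as the admissibility remark in the text predicts.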
We refer to h(s) as the h-value of s and assume familiarity with the A* algorithm (Hart et al., 1968): g(s) denotes the cost of the path from the start state to s, and f(s) is defined as g(s) + h(s). The f-value and g-value of s refer to f(s) and g(s), respectively.

Real-Time Search
The objective of a real-time search algorithm is to make an agent travel from an initial state to a goal state, performing, between moves, an amount of computation bounded by a constant. An example situation is path-finding in a priori unknown grid-like environments. There the agent has sufficient memory to store its current belief about the structure of the search space. In addition, the free-space assumption (Zelinsky, 1992; Koenig, Tovey, & Smirnov, 2003) is made: the environment is initially assumed to be obstacle-free. The agent is capable of a limited form of sensing: only obstacles in neighbor states can be detected. When obstacles are detected, the agent updates its map accordingly.
Many state-of-the-art real-time heuristic search algorithms can be described by the pseudo-code in Algorithm 1. The algorithm iteratively executes a lookahead-update-act cycle until the goal is reached. The lookahead phase (Lines 4-6) determines the next state to move to, the update phase (Line 7) updates the heuristic, and the act phase (Line 8) moves the agent to its next position. The lookahead-update part of the cycle (Lines 4-7) is referred to as the planning episode throughout the paper.

Algorithm 1: A generic real-time heuristic search algorithm
Input: A search problem P, and a heuristic function h.
Side Effect: The agent is moved from the initial state to a goal state, if a trajectory exists.
Line 8 (act phase): move the agent from s_current to s_next through the path identified by LookAhead; stop if an action cost along the path is updated.

The generic algorithm has three local variables: s_current stores the current position of the agent, c(s, s′) contains the cost of moving from state s to a successor s′, and h is such that h(s) contains the heuristic value for s. All three variables may change over time. In path-finding tasks, when the environment is initially unknown, the initial value of c is such that no obstacles are assumed; i.e., c(s, s′) < ∞ for any two neighbor states s, s′. The initial value of h(s), for every s, is given as a parameter.
The generic algorithm receives as input a search problem P, and starts off by initializing some variables (Lines 1-2). In h_0 it records the initial value of h for all states in P, and in s_current it stores the initial position of the agent, s_0. We assume the cost of an arc cannot decrease. In particular, arc costs increase to infinity when an obstacle is discovered.
In the lookahead phase (Lines 4-6), the algorithm determines where to proceed next. The Lookahead() procedure in Line 4 implements a bounded search procedure that expands states from the current state s_current. The set of states generated by this call is referred to as the local search space. Different choices can be made to implement this procedure. Real-Time A* (RTA*) and Learning Real-Time A* (LRTA*), two early algorithms proposed by Korf (1990), as well as other modern real-time search algorithms, run a search from the current state up to a fixed depth (e.g., Bulitko & Lee, 2006). Another common option is to run a bounded A* search; such a choice is made by Local Search Space LRTA* (LSS-LRTA*) (Koenig & Sun, 2009) and Real-Time Adaptive A* (RTAA*) (Koenig & Likhachev, 2006b). Algorithm 2 shows the pseudo-code for bounded A*. Note that at most k states are expanded, where k is a parameter of the algorithm usually referred to as the lookahead parameter. The pseudo-code of the generic real-time search algorithm assumes that the call to Lookahead() stores the frontier of the local search space in Open, and, moreover, that if a goal state is found during search, such a state is not removed from the frontier (in the bounded A* pseudo-code this is guaranteed by the condition in Line 7).
In the last step of the lookahead phase (Line 6, Algorithm 1), the variable containing the next state to move to, s_next, is assigned. Here, most algorithms select the state in the search frontier that is estimated to be closest to a goal state. When A* lookahead is used, such a state usually corresponds to a state with minimum f-value in Open. Thus, A*-based lookahead algorithms use Algorithm 3 to implement the Extract-Best-State() function.
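To make the lookahead phase concrete, the following sketch, in the spirit of Algorithms 2 and 3, shows a bounded A* that expands at most k states and a frontier-selection rule that picks a minimum f-value state. This is our own illustrative Python, with simplified tie-breaking; the function names and the toy example are assumptions, not the paper's code.

```python
import heapq

# Bounded A* lookahead: expand at most k states from s_start; return the
# frontier (Open), the interior (Closed), and the g-values.
def bounded_astar(succ, cost, h, s_start, goals, k):
    g = {s_start: 0.0}
    heap = [(h[s_start], s_start)]
    closed = set()
    expansions = 0
    while heap and expansions < k:
        f, s = heapq.heappop(heap)
        if s in closed:
            continue                       # stale heap entry
        if s in goals:
            heapq.heappush(heap, (f, s))   # a goal is never removed from Open
            break
        closed.add(s)
        expansions += 1
        for t in succ(s):
            gt = g[s] + cost[(s, t)]
            if t not in g or gt < g[t]:
                g[t] = gt
                heapq.heappush(heap, (gt + h[t], t))
    open_states = {s for _, s in heap if s not in closed}
    return open_states, closed, g

# Extract-Best-State: a frontier state with minimum f-value.
def extract_best(open_states, g, h):
    return min(open_states, key=lambda s: g[s] + h[s])

# toy chain a -- b -- c -- g with unit costs and a consistent heuristic
cost = {p: 1 for p in [('a','b'),('b','a'),('b','c'),('c','b'),('c','g'),('g','c')]}
succ = lambda s: [v for (u, v) in cost if u == s]
h = {'a': 3, 'b': 2, 'c': 1, 'g': 0}
open_s, closed, g = bounded_astar(succ, cost, h, 'a', {'g'}, 2)
```

With lookahead k = 2, the local search space has interior {a, b} and frontier {c}, and c is the state selected to move towards.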
In the update phase (Line 7, Algorithm 1), the heuristic of some states in the search space is updated to a value that is a better estimate of the true cost to reach a solution, while remaining consistent. After exploring states in the vicinity of s_current, the algorithm gains information about the heuristic value of a number of states. Using this information, the h-value of s_current, and potentially that of other states in the search space, can be updated in such a way that it reflects a better estimate of the cost to reach a solution. Since after the update the heuristic values of some states are closer to the true cost, this phase is also referred to as the learning phase.
The literature describes several ways in which one can implement the update of the heuristic, e.g., mini-min (e.g., Korf, 1990), max of mins (Bulitko, 2004), and heuristic bounded propagation (Hernández & Meseguer, 2005). The learning rules that are most relevant to this paper, however, are those implemented by LSS-LRTA* and RTAA*. They are described in detail in the following subsections.
Finally, after learning, the agent attempts to move to the state selected by the Extract-Best-State() function, s_next. In most implementations, the path to the selected state has already been computed by the Lookahead() procedure (in the case of Algorithm 2, the path is reconstructed using the back pointer that is set in Line 13). When the environment is known in advance, the agent can always move to the destination. When the environment is not known in advance, however, this process can fail (in path-finding, this can occur due to the discovery of an obstacle). In such cases, we assume the agent stops moving as soon as it detects the obstacle, and the algorithm updates its memory regarding the environment, which typically involves updating the cost function. In our pseudo-code, this is reflected in Line 10.
LSS-LRTA* and RTAA*

We now describe LSS-LRTA* and RTAA*, the two state-of-the-art real-time heuristic search algorithms that are most relevant to this paper. We make two small contributions to the understanding of these algorithms. First, we experimentally compare them over benchmarks that had not been considered before. Second, we prove two theoretical results that aim at understanding the differences between their update mechanisms (Propositions 1 and 2). To our knowledge, none of these results appear in the literature.

LSS-LRTA*
Local Search Space LRTA* (LSS-LRTA*) was first introduced by Koenig (2004), and later presented in detail by Koenig and Sun (2009). It is an instance of Algorithm 1. Its lookahead procedure is a bounded A* search (Algorithm 2). The next state to move to corresponds to a state in Open with the lowest f-value; i.e., it uses Algorithm 3 to implement Extract-Best-State().
LSS-LRTA* updates the value of each state s in the local search space in such a way that h(s) is assigned the maximum possible value that guarantees consistency with the states in Open. It does so by implementing the Update() procedure as a modified Dijkstra's algorithm (Algorithm 4). Since the value of h is raised to the maximum, the update mechanism of LSS-LRTA* makes h as informed as it can get given the current knowledge about the search space, while maintaining consistency.
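The update just described can be sketched in a few lines: seed a Dijkstra-like sweep from the frontier states and relax the h-values of interior states. This is our own illustrative rendering of the idea behind the modified Dijkstra procedure, not the paper's pseudo-code; the example data is assumed.

```python
import heapq
import math

# LSS-LRTA*-style update: raise h of every interior (Closed) state to the
# largest value that stays consistent with the frontier (Open), via a
# Dijkstra-like sweep seeded from the frontier's h-values.
def lss_lrta_update(h, cost, succ, open_states, closed):
    for s in closed:
        h[s] = math.inf                    # forget old interior values
    heap = [(h[s], s) for s in open_states]
    heapq.heapify(heap)
    while heap:
        hs, s = heapq.heappop(heap)
        if hs > h[s]:
            continue                       # stale entry
        for t in succ(s):
            if t in closed and h[t] > cost[(t, s)] + hs:
                h[t] = cost[(t, s)] + hs   # make h(t) consistent w.r.t. s
                heapq.heappush(heap, (h[t], t))

# local search space: chain a -- b -- c with Open = {c} and Closed = {a, b}
cost = {p: 1 for p in [('a','b'),('b','a'),('b','c'),('c','b')]}
succ = lambda s: [v for (u, v) in cost if u == s]
h = {'a': 0, 'b': 1, 'c': 1}
lss_lrta_update(h, cost, succ, {'c'}, {'a', 'b'})
```

After the sweep, each interior h-value equals its shortest distance to the frontier plus the frontier's h-value, i.e., the largest value still consistent with Open.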
Algorithm 4: LSS-LRTA*'s Modified Dijkstra Procedure. We assume the Open list is a priority queue ordered by h-value.

RTAA*

Real-Time Adaptive A* (RTAA*) was proposed by Koenig and Likhachev (2006b). It is an instance of Algorithm 1. Its lookahead phase is identical to that of LSS-LRTA*: a bounded A* search, followed by selecting a state with the lowest f-value in Open as the next state to move to. However, it uses a simpler learning mechanism, based on the update rule of the incremental A* search algorithm Adaptive A* (Koenig & Likhachev, 2006a). Thus, it updates the heuristic value of states in the interior of the local search space (i.e., those stored in A*'s variable Closed) using the f-value of the best state in Open. The procedure is shown in Algorithm 5.

RTAA*'s update procedure is considerably faster in practice than that of LSS-LRTA*. Obtaining the lowest f-value of a state in Open can be done in constant time if A* is implemented with binary heaps. After that, the algorithm simply iterates through the states in Closed. The worst-case performance is then O(|Closed|). On the other hand, LSS-LRTA*'s update procedure first needs to convert Open into a priority queue ordered by h-value, and then may, in the worst case, need to extract |Open| + |Closed| elements from a binary heap. In addition, it expands each node that is ever extracted from the priority queue. The time to complete these operations is, in the worst case,

T_exp · N + T_b · N log N,

where N = |Open| + |Closed|, T_exp is the time taken per expansion, and T_b is a constant factor associated with extraction from the binary heap. The worst-case asymptotic complexity of extraction is thus O(N log N). However, since we usually deal with a small N, it may be the case that the term T_exp · N dominates the expression for time.
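The RTAA*-style update can be sketched even more compactly: with f* the lowest f-value on the frontier, each interior state s gets h(s) = f* − g(s) in a single pass over Closed. The code and example data below are our own illustration of this rule, not the paper's Algorithm 5.

```python
# RTAA*-style update: h(s) = f* - g(s) for every interior state s,
# where f* is the lowest f-value among frontier states. One pass over
# Closed, hence O(|Closed|).
def rtaa_update(h, g, open_states, closed):
    f_star = min(g[s] + h[s] for s in open_states)
    for s in closed:
        h[s] = f_star - g[s]

# a three-state interior example: Closed = {a, b}, Open = {c},
# with g-values as produced by a bounded A* from a
g = {'a': 0, 'b': 1, 'c': 2}
h = {'a': 0, 'b': 1, 'c': 1}
rtaa_update(h, g, {'c'}, {'a', 'b'})
```

On this small example the resulting h-values coincide with those LSS-LRTA*'s update would produce, since every interior state lies on the path found by A*; in general RTAA*'s values can be lower, as Propositions 1 and 2 make precise.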
We will prove that the heuristic values that RTAA* learns may be less accurate than those of LSS-LRTA*. To state this formally, we introduce some notation. Let h_n, for n > 0, denote the value of the h variable at the start of iteration n of the main algorithm, or, equivalently, right after the update phase of iteration n − 1. We will also denote by h_0 the heuristic function given as input. Let k_n(s, s′) denote the cost of an optimal path from s to s′ that traverses states only in Closed before ending in s′.

Proposition 1 Let s be a state in Closed right after the call to A* has returned in the n-th iteration of LSS-LRTA*. Then,

h_{n+1}(s) = min_{s_b ∈ Open} [k_n(s, s_b) + h_n(s_b)].  (1)

Proof: We will show that the value h(s) computed by the modified Dijkstra algorithm for each state s corresponds to the minimum cost of reaching node s from a certain state in a particular graph G. The modified Dijkstra procedure can be seen as a run of the standard Dijkstra algorithm (e.g., Cormen, Leiserson, Rivest, & Stein, 2001) on such a graph. First we observe that our procedure differs from the standard Dijkstra algorithm in that a non-singleton set of states, namely those in Open, is initialized with a finite value for h. In the standard Dijkstra algorithm, on the other hand, only the source node is initialized with a cumulative cost of 0, whereas the remaining nodes are initialized to ∞. With the facts above in mind, it is straightforward to see that a run of the modified Dijkstra procedure can be interpreted as a run of the standard Dijkstra algorithm from node s_start of a directed graph G that is such that:
• Its nodes are exactly those in Open ∪ Closed, plus a distinguished node s_start.
• It contains an arc (u, v) with cost c if there is an arc (v, u) with cost c in the search graph of P such that one of v or u is not in Open.
• It contains an arc of the form (s_start, s) with cost h(s) for each s in Open.
• It contains no other arcs.
After running the Dijkstra algorithm from s_start over G, we obtain, for each node s in G, the cost of an optimal path from s_start to s. If we interpret such a cost as h(s), for each s, Equation 1 holds, which finishes the proof.
For RTAA* we can prove a slightly different result.
Proposition 2 Right after the call to A* returns in the n-th iteration of RTAA*, let s* be the state with lowest f-value in Open, and let s be a state in Closed. Then,

h_{n+1}(s) ≤ min_{s_b ∈ Open} [k_n(s, s_b) + h_n(s_b)].  (2)

However, if h_n is consistent and s is on the path found by A* from s_current to s*, then

h_{n+1}(s) = min_{s_b ∈ Open} [k_n(s, s_b) + h_n(s_b)].  (3)

Proof: For (2), we use the fact that if the heuristic is consistent, it remains consistent after each RTAA* iteration (a fact proven by Koenig & Likhachev, 2006a) to write the inequality h_{n+1}(s) ≤ min_{s_b ∈ Open} [k_n(s, s_b) + h_{n+1}(s_b)]. Now note that for every state s_b in Open it holds that h_n(s_b) = h_{n+1}(s_b), since the heuristic values of states in Open are not updated. Substituting h_{n+1}(s_b) by h_n(s_b) in the inequality, we obtain the required result. For (3), we use a fact proven by Hart et al. (1968) about A*: if a consistent heuristic is used, g(s) contains the cost of the cheapest path from the start state to s right after s is extracted from Open (Line 8 in Algorithm 2).
Because A* is run with a consistent heuristic, for any state s′ along the (optimal) path found by A* from s_current to s*,

g(s′) = k_n(s_current, s′)  and  g(s*) = g(s′) + k_n(s′, s*).  (4)

RTAA*'s update rule states that

h_{n+1}(s′) = f(s*) − g(s′) = g(s*) + h_n(s*) − g(s′).  (5)

Substituting (4) in (5), we obtain

h_{n+1}(s′) = k_n(s′, s*) + h_n(s*).  (6)

It remains to show that k_n(s′, s*) + h_n(s*) = min_{s_b ∈ Open} [k_n(s′, s_b) + h_n(s_b)]. Indeed, if there were an s− ∈ Open such that k_n(s′, s*) + h_n(s*) > k_n(s′, s−) + h_n(s−), then by adding g(s′) to both sides of the inequality we would have that f(s*) > f(s−), which contradicts the fact that s* is the state with lowest f-value in Open. We conclude, therefore, that h_{n+1}(s′) = min_{s_b ∈ Open} [k_n(s′, s_b) + h_n(s_b)]. This finishes the proof.
Proposition 2 implies that, when using consistent heuristics, RTAA*'s update may yield less informed h-values than those of LSS-LRTA*. However, at least for some of the states in the local search space, the final h-values are equal to those of LSS-LRTA*, and hence they are as informed as they can be given the current knowledge about the search space. Koenig and Likhachev (2006a) show that, for a fixed value of the lookahead parameter, the quality of the solutions obtained by LSS-LRTA* is better on average than that of the solutions obtained by RTAA* in path-finding tasks over mazes. This is due to the fact that LSS-LRTA*'s heuristic is more informed over time than that of RTAA*. However, they also showed that, given a fixed time deadline per planning episode, RTAA* yields better solutions than LSS-LRTA*. This is essentially because RTAA*'s update mechanism is faster: for a fixed deadline, a higher lookahead parameter can be used with RTAA* than with LSS-LRTA*.
We extend Koenig and Likhachev's experimental analysis by comparing the two algorithms on game maps. Table 1 shows average results for LSS-LRTA* and RTAA* run on 12 different game maps. For each map, we generated 500 random test cases. Observe, for example, that if a deadline of 0.0364 milliseconds is imposed per planning episode, we can choose to run RTAA* with lookahead k = 128, whereas we can run LSS-LRTA* only with lookahead k = 64. With those parameters, RTAA* obtains a solution about 36% cheaper than LSS-LRTA* does. Figure 1 shows average solution cost versus time per episode. The slopes of the curves suggest that the rate at which RTAA* improves solutions, as more time per episode is given, is better than that of LSS-LRTA*. We thus confirm, for a wider range of tasks, that when time per episode matters, RTAA* is superior to LSS-LRTA*. These findings are important because mazes (for which previous evaluations existed) are problems with a very particular structure, and results over them do not necessarily generalize to other types of problems.
Although we conclude that RTAA* is superior to LSS-LRTA* when it comes to finding a good solution quickly, it is interesting to note that recent research on real-time heuristic search has focused mainly on extending or using LSS-LRTA* (see, e.g., Bulitko, Björnsson, & Lawrence, 2010; Bond, Widger, Ruml, & Sun, 2010; Hernández & Baier, 2011d; Sturtevant & Bulitko, 2011), while RTAA* is rarely considered. Since LSS-LRTA* seems to be an algorithm under active study by the community, in this paper we apply our techniques to both algorithms.

Heuristic Depressions
In real-time search problems, heuristics usually contain depressions. The identification of depressions is central to our algorithms. Intuitively, a heuristic depression is a bounded region of the search space containing states whose heuristic value is too low with respect to the heuristic values of states on the border of the depression. Depressions exist naturally in heuristics used along with real-time heuristic search algorithms. As we have seen above, real-time heuristic algorithms build solutions incrementally, updating the heuristic values associated with certain states as more information is gathered from the environment. Ishida (1992) gave a constructive definition of heuristic depressions. The construction starts with a state s whose heuristic value is less than or equal to those of the surrounding states. The region is then extended by adding a state of its border if all states in the resulting region have a heuristic value lower than or equal to those of the states in the border. As a result, a heuristic depression D is a maximal connected component of states such that all states in the boundary of D have a heuristic value greater than or equal to the heuristic value of any state in D.
It is known that algorithms like LRTA* behave poorly in the presence of heuristic depressions (Ishida, 1992). To see this, assume that LRTA* is run with lookahead depth equal to 1, so that it only expands the current state, leaving its immediate successors in the search frontier. Assume further that it visits a state in a depression and that the solution node lies outside the depression. To exit the depressed region, the agent must follow a path in the interior of the depressed region, say s_1 … s_n, finally choosing a state on the border of the region, say s_e. While visiting s_n, the agent chooses s_e as the next move, which means that s_e minimizes the estimated cost to reach a solution among all the neighbors of s_n. In problems with uniform action costs, this can only happen if h(s_e) is lower than or equal to the heuristic value of all other neighbors of s_n. This fact actually means that the depression in that region of the search space no longer exists, which can only happen if the heuristic values of states in the originally depressed region have been updated (increased). For LRTA*, the update process may be quite costly: in the worst case, all states in the depression may need to be updated, and each state may need to be updated several times.
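The trapped behavior is easy to reproduce. The sketch below is an illustrative depth-1 LRTA* of our own (with a mini-min update) on a toy chain, not the grids of Figure 2; on it the region {a, b} is heuristically depressed, so the agent oscillates between a and b, raising their h-values several times before escaping via c.

```python
# Depth-1 LRTA* sketch: expand only the current state, raise its h-value
# with the mini-min rule, then move to the apparently best neighbor.
def lrta1(succ, cost, h, s, goals, max_steps=1000):
    visits = {}
    steps = 0
    while s not in goals and steps < max_steps:
        visits[s] = visits.get(s, 0) + 1
        # mini-min update: h(s) := max(h(s), min over neighbors of c + h)
        h[s] = max(h[s], min(cost[(s, t)] + h[t] for t in succ(s)))
        # move to the neighbor minimizing estimated cost to a solution
        s = min(succ(s), key=lambda t: cost[(s, t)] + h[t])
        steps += 1
    return visits, steps

# chain a -- b -- c -- g, unit costs; h is far too low inside {a, b}
cost = {p: 1 for p in [('a','b'),('b','a'),('b','c'),('c','b'),('c','g'),('g','c')]}
succ = lambda s: [v for (u, v) in cost if u == s]
h = {'a': 0, 'b': 1, 'c': 4, 'g': 0}
visits, steps = lrta1(succ, cost, h, 'b', {'g'})
```

Running this, the agent bounces between a and b, updating each of them repeatedly, before the depressed values are raised high enough for c to look attractive; the number of moves grows with the amount of updating the depression requires.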
Ishida's definition is, nonetheless, restrictive. In fact, it does not take into account the costs of the actions needed to move from the interior of the depression to the exterior. A closed region of states may have unrealistically low heuristic values even though the heuristic values in the interior are greater than the ones on the border. We propose a more intuitive notion of depression that takes costs into account. The formal definition follows.
Definition 1 (Cost-sensitive heuristic depression) A connected component of states D is a cost-sensitive heuristic depression of a heuristic h iff for any state s ∈ D and every state s′ ∉ D that is a neighbor of a state in D, h(s) < k(s, s′) + h(s′), where k(s, s′) denotes the cost of the cheapest path that starts in s, traverses states only in D, and ends in s′.
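Definition 1 can be checked directly on small instances: compute k(s, s′) with a Dijkstra restricted to D, and test the inequality for every interior/border pair. The code below is our own illustrative sketch (it assumes D is connected and the instance is small); the toy chain and heuristic are assumptions for the example.

```python
import heapq

# k(s, s_out): cost of the cheapest path that starts at s, traverses
# states only in D, and ends at s_out.
def k_inside(cost, succ, D, s, s_out):
    dist, heap, best = {s: 0.0}, [(0.0, s)], float('inf')
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale entry
        for v in succ(u):
            if v == s_out:
                best = min(best, d + cost[(u, v)])
            elif v in D and d + cost[(u, v)] < dist.get(v, float('inf')):
                dist[v] = d + cost[(u, v)]
                heapq.heappush(heap, (dist[v], v))
    return best

# D is a cost-sensitive heuristic depression of h iff every s in D
# satisfies h(s) < k(s, s') + h(s') for every neighbor s' of D outside D.
def is_cs_depression(h, cost, succ, D):
    border = {t for s in D for t in succ(s) if t not in D}
    return all(h[s] < k_inside(cost, succ, D, s, t) + h[t]
               for s in D for t in border)

# chain a -- b -- c -- g with unit costs and a misleadingly low h on {a, b}
cost = {p: 1 for p in [('a','b'),('b','a'),('b','c'),('c','b'),('c','g'),('g','c')]}
succ = lambda s: [v for (u, v) in cost if u == s]
h = {'a': 0, 'b': 1, 'c': 4, 'g': 0}
```

On this instance {a, b} satisfies the definition, while {a, b, c} does not, since from c the goal g is one cheap step away and h(c) is not below k(c, g) + h(g).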
Cost-sensitive heuristic depressions better reflect the regions in which an agent controlled by an algorithm such as LRTA* gets trapped. To illustrate this, consider the two 4-connected grid-world problems of Figure 2. Gray cells form an Ishida depression. The union of yellow and gray cells forms a cost-sensitive heuristic depression. Suppose the agent's initial position is the lower-right corner of the Ishida depression (C4 in Figure 2(a), and C7 in Figure 2(b)). Assume further that ties are broken such that the priorities, from higher to lower, are: down, left, up, and right. For such an initial state, both in situation (a) and in situation (b), the agent controlled by LRTA* will visit every state in the cost-sensitive heuristic depression before reaching the goal. Indeed, cells in the cost-sensitive depression that are not adjacent to an obstacle are visited exactly 3 times, while cells adjacent to an obstacle are visited 2 times, before the agent escapes the depression; thus the performance of LRTA* can be described as a linear function of the size of the cost-sensitive depression.
It is interesting to note that for problems like the ones shown in Figure 2, the size of the Ishida depression remains the same while the width of the grid varies. Thus, the size of the Ishida depression is not correlated with the performance of LRTA*. On the other hand, the size of the cost-sensitive heuristic depression is a predictor of the cost of the solution.

Depression Avoidance
A major issue in solving real-time search problems is the presence of heuristic depressions.
State-of-the-art algorithms are able to deal with this problem essentially by doing extensive learning and/or extensive lookahead. By doing more lookahead, chances are that a state outside of a depression is eventually selected to move to. On the other hand, by learning the heuristic values of several states at a time, fewer movements might be needed to raise the heuristic values of states in the interior of a depression high enough to make it disappear. As such, LSS-LRTA*, run with a high value for the lookahead parameter, exits depressions more quickly than LRTA* run with search depth equal to 1, for two reasons: (1) the heuristic function increases more quickly for states in the depression, and (2) with a high value for the lookahead parameter it is sometimes possible to escape the depression in one step.
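To see why learning alone is costly inside a depression, consider a minimal sketch of LRTA*'s rule with lookahead 1 (our own illustration, not the paper's pseudocode): the agent raises h of its current state to the best one-step estimate and then moves greedily, so inside a depression h rises only by small increments while the agent shuttles between depressed states.

```python
def lrta_star_step(s, h, neighbors, cost):
    """One LRTA* step with lookahead 1: learn, then move greedily.
    h is a dict of heuristic values; neighbors/cost describe the graph."""
    # Best successor according to one-step cost plus current heuristic.
    best = min(neighbors(s), key=lambda sp: cost(s, sp) + h[sp])
    # Learning rule: h(s) <- max(h(s), c(s, best) + h(best)).
    h[s] = max(h[s], cost(s, best) + h[best])
    return best  # the agent moves to the best successor

# A tiny depression: state 1 is a dead end with a misleadingly low h-value.
nb = {0: [1, 2], 1: [0], 2: [0]}
h = {0: 0, 1: 0, 2: 3}
s, visits = 0, []
for _ in range(6):
    s = lrta_star_step(s, h, lambda u: nb[u], lambda u, v: 1)
    visits.append(s)
# The agent oscillates between 0 and 1, raising h, before reaching 2:
# visits == [1, 0, 1, 0, 2, 0]
```

The number of re-visits grows with the size (and cost) of the depression, which matches the linear behavior of LRTA* described in the previous section.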
Besides the already discussed LSS-LRTA* and RTAA*, there are many algorithms described in the literature capable of doing extensive lookahead and learning. The lookahead ability of LRTS (Bulitko & Lee, 2006) and TBA* (Björnsson et al., 2009) is parametrized. By using algorithms such as LRTA*(k) (Hernández & Meseguer, 2005), PLRTA* (Rayner et al., 2007), and LRTA*_LS(k) (Hernández & Meseguer, 2007), one can increase the number of states updated based on a parameter. None of these algorithms, however, is aware of depressions; their design simply allows them to escape depressions thanks to their ability to do lookahead, learning, or a combination of both. Later, in Section 9, we give a more detailed overview of other related work.
To improve search performance, our algorithms avoid depressions, following a principle we call depression avoidance. Depression avoidance is a simple principle that dictates that search should be guided away from states identified as being in a heuristic depression. There are many ways in which one could implement this principle in a real-time heuristic search algorithm. Below we present two alternative realizations of the principle within the state-of-the-art RTAA* and LSS-LRTA* algorithms. As a result, we propose four new real-time search algorithms, each of which has good theoretical properties.

Depression Avoidance via Mark-and-Avoid
This subsection presents a first possible realization of depression avoidance that we call mark-and-avoid. With this strategy, we extend the update phase to mark states that we can prove belong to a heuristic depression. We then modify the selection of the best state (i.e., the Extract-Best-State() function) to select states that are not marked, i.e., states that have not yet been proven to be part of a depression.
aLSS-LRTA* is a version of LSS-LRTA* that avoids depressions via mark-and-avoid. It is obtained by implementing the Update() function using Algorithm 6 and by implementing the Extract-Best() function with Algorithm 7. There are two differences between its update procedure and LSS-LRTA*'s. The first is the initialization of the updated flag in Lines 2-3. The second is Line 7, which sets s.updated to true if the heuristic value of s changes as a result of the update process. In the following section, we formally prove that this means that s was inside a cost-sensitive heuristic depression (Theorem 5).
To select the next state s_next, aLSS-LRTA* chooses the state with lowest f-value from Open that has not been marked as in a depression. If no such state exists, the algorithm selects the state with lowest f-value from Open, just like LSS-LRTA* would. Depending on the implementation, the worst-case complexity of this new selection mechanism may be different from that of Algorithm 3. Indeed, if the Open list is implemented with a binary heap (as is our case), the worst-case complexity of Algorithm 7 is O(N log N), where N is the size of Open. This is because the heap is ordered by f-value. On the other hand, the worst-case complexity of Algorithm 3 using binary heaps is O(1). In our experimental results, however, we do not observe a significant degradation in performance due to this factor.
Example. Figure 3 shows an example that illustrates the difference between LSS-LRTA* and aLSS-LRTA* with the lookahead parameter equal to two. After 4 search episodes, we observe that aLSS-LRTA* avoids the depression, leading the agent to a position that is 2 steps closer to the goal than LSS-LRTA*.
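The selection mechanism can be sketched as follows (our own simplified rendering of the idea behind Algorithm 7, with a hypothetical Node class; the paper's version operates on the real Open/Closed structures of LSS-LRTA*). The update phase is assumed to set `updated` whenever it raises a state's h-value; the selection phase then prefers unmarked frontier states:

```python
import heapq, itertools

_tick = itertools.count()  # tie-breaker so heap entries never compare Nodes

class Node:
    """Minimal search node for this sketch; real nodes also carry g, h, etc."""
    def __init__(self, name):
        self.name = name
        self.updated = False  # set to True when the update phase raises h

def push(open_heap, f, node):
    heapq.heappush(open_heap, (f, next(_tick), node))

def extract_best_avoiding(open_heap):
    """Prefer the lowest-f state not marked as updated; if every state in
    Open is marked, fall back to the overall lowest-f state, exactly as
    LSS-LRTA* would."""
    skipped, best = [], None
    while open_heap:
        entry = heapq.heappop(open_heap)
        if not entry[2].updated:
            best = entry[2]
            break
        skipped.append(entry)
    if best is None and skipped:
        best = skipped.pop(0)[2]  # lowest f among the marked states
    for entry in skipped:         # restore the states we skipped over
        heapq.heappush(open_heap, entry)
    return best
```

Because marked states may have to be popped and re-inserted, extraction costs O(N log N) in the worst case, which matches the complexity discussion above.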
Algorithm 8: aRTAA*'s Update Procedure
With aLSS-LRTA* as a reference, it is straightforward to implement the mark-and-avoid strategy in RTAA*. The update phase of the resulting algorithm, aRTAA*, is just like RTAA*'s but is extended to mark states in a depression (Algorithm 8). The selection of the best state to move to is done in the same way as in aLSS-LRTA*, i.e., with Algorithm 7. As a result, aRTAA* is a version of RTAA* that aims at avoiding depressions using mark-and-avoid.

Depression Avoidance via Move-to-Border
Move-to-border is a finer-grained implementation of depression avoidance. To illustrate the difference, consider a situation in which, after lookahead, there is no state s in the frontier of the local search space such that s.updated is false. Intuitively, this is a situation in which the agent is "trapped" in a heuristic depression. In this case, aLSS-LRTA* behaves exactly as LSS-LRTA* does, since all states in the search frontier are marked. Nevertheless, in these cases we would still like the movement of the agent to be guided away from the depression.
In situations in which all states in the frontier of the local search space are already proven to be members of a depression, the move-to-border strategy attempts to move to a state that seems closer to the border of the depression. As the next state, this strategy chooses the state with best f-value among the states whose heuristic value has changed the least. The intuition behind this behavior is as follows: assume ∆(s) is the difference between the actual cost to reach a solution from a state s and the initial heuristic value of s. Then, if s_1 is a state close to the border of a depression D and s_2 is a state farther away from the border and "deep" in the interior of D, then ∆(s_2) ≥ ∆(s_1), because the heuristic of s_2 is more imprecise than that of s_1. At execution time, h is an estimate of the actual cost to reach a solution.
[Figure 4: a run with lookahead equal to 1 in a 4-connected grid, analogous to our previous example, in which the objective is cell E2. In iterations 1 to 14 both algorithms execute in the same way. Numbers in cells correspond to the initial h-value (lower-left), the current h-value (lower-right), and the difference between those two amounts (upper-right). Triangles denote states whose heuristic value has been updated.]
daLSS-LRTA* and daRTAA* differ, respectively, from LSS-LRTA* and RTAA* in that the selection of the next state to move to (i.e., function Extract-Best()) is implemented via Algorithm 9. Note that the worst-case complexity of this algorithm is O(N log N), where N is the size of Open, if binary heaps are used.
Algorithm 9: Selection of the next state by daRTAA * and daLSS-LRTA * .
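A minimal sketch of this selection rule (our own rendering; the paper's Algorithm 9 works over a binary heap, hence its O(N log N) bound, while this linear scan conveys only the rule): among the states in Open, prefer the smallest heuristic change h(s) − h0(s), breaking ties by lowest f-value.

```python
def extract_best_move_to_border(open_list, h, h0):
    """Move-to-border selection: pick the state whose heuristic has
    changed the least (it seems closest to the depression's border),
    breaking ties by lowest f-value. open_list holds (f, state) pairs;
    h and h0 map states to current and initial heuristic values."""
    best, best_key = None, None
    for f, s in open_list:
        key = (h[s] - h0[s], f)  # primary: least learning; secondary: f
        if best is None or key < best_key:
            best, best_key = s, key
    return best
```

When some state in Open still has h(s) = h0(s), its change is 0 and the rule reduces to choosing the best unmarked state, so in that case the move-to-border and mark-and-avoid strategies coincide.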
Figure 4 illustrates the differences between aLSS-LRTA* and daLSS-LRTA*. Both algorithms execute in the same way if, after the lookahead phase, there is a state in Open whose heuristic value has not been updated. However, when this is not the case (i.e., when the algorithm is "trapped" in a depression), daLSS-LRTA* will move to what seems to be closer to the border of the depression. In the example of Figure 4, at iteration 15, the algorithm chooses B4 instead of C3, since B4 is the state for which the h-value has changed the least. After iteration 18, daLSS-LRTA* will move to cells in which less learning has been carried out and thus will exit the depression more quickly.
All the new algorithms presented in this section are closely related. Table 2 shows a schematic view of the different components of each algorithm, and the complexity of the involved procedures.

Theoretical Analysis
In this section we analyze the theoretical properties of the algorithms we propose. We prove that all of our algorithms satisfy the desirable properties that hold for their ancestors. We start off by presenting theoretical results that can be proven using existing proofs available in the literature; among them, we show that the consistency of the heuristic is maintained by all our algorithms during run time. We continue with results that need different proofs; in particular, termination and convergence to an optimal solution.
As before, we use h_n to refer to the value of variable h at the start of iteration n (h_0, thus, denotes the heuristic function given as a parameter to the algorithm). Similarly, c_n(s, s′) is the cost of the arc between s and s′ at the start of iteration n. Finally, k_n(s, s′) denotes the cost of an optimal path between s and s′ that traverses only nodes in Closed before ending in s′, with respect to cost function c_n.
We first establish that if h is initially consistent, then h is non-decreasing over time. This is an important property since it means that the heuristic becomes more accurate over time.
Theorem 1 If h_n is consistent with respect to cost function c_n, then h_{n+1}(s) ≥ h_n(s) for any n along an execution of aLSS-LRTA* or daLSS-LRTA*.
Proof: Assume the contrary, i.e., that there is a state s such that h_n(s) > h_{n+1}(s). State s must be in Closed, since those are the only states whose h-value may be updated. As such, by Proposition 1, we have that h_{n+1}(s) = k_n(s, s_b) + h_n(s_b), for some state s_b in Open.
However, since h_n(s) > h_{n+1}(s), we conclude that:
h_n(s) > k_n(s, s_b) + h_n(s_b),
which contradicts the fact that h_n is consistent. We thus conclude that the h-value of s cannot decrease.
Theorem 2 If h_n is consistent with respect to cost function c_n, then h_{n+1}(s) ≥ h_n(s) for any n along an execution of aRTAA* or daRTAA*.
Proof: Assume the contrary, i.e., that there is a state s such that h_n(s) > h_{n+1}(s). State s must be in Closed, since those are the only states whose h-value may be updated. The update rule will set the value of h_{n+1}(s) to f(s′) − g(s) for some s′ ∈ Open, i.e.,
h_{n+1}(s) = f(s′) − g(s).
But since h_n(s) > h_{n+1}(s), we have that:
h_n(s) > f(s′) − g(s).
Reordering terms, we obtain that:
g(s) + h_n(s) > f(s′),
which means that the f-value of s is greater than the f-value of s′. It is known, however, that A*, run with a consistent heuristic, expands nodes with non-decreasing f-values. We conclude, thus, that s′ must have been expanded before s. Since s′ is in Open, s′ was not expanded, and hence s cannot be in Closed, which contradicts our initial assumption. We thus conclude that the h-value of s cannot decrease.
Theorem 3 If h_n is consistent with respect to cost function c_n, then h_{n+1} is consistent with respect to cost function c_{n+1} along an execution of aLSS-LRTA* or daLSS-LRTA*.
Proof: Since the update procedures used by aLSS-LRTA*, daLSS-LRTA*, and LSS-LRTA* update variable h in exactly the same way, the proof by Koenig and Sun (2009) can be reused here. However, we provide a rather simpler proof in Section B.1.
Theorem 4 If h_n is consistent with respect to cost function c_n, then h_{n+1} is consistent with respect to cost function c_{n+1} along an execution of aRTAA* or daRTAA*.
Proof: Since the update procedures used by aRTAA*, daRTAA*, and RTAA* update variable h in exactly the same way, we can reuse the proof of Theorem 1 by Koenig and Likhachev (2006b) to establish this result. We provide, however, a complete proof in Section B.2.
The objective of the mark-and-avoid strategy is to stay away from depressions. The following theorems establish that, indeed, when a state is marked by aLSS-LRTA* or aRTAA*, such a state is in a heuristic depression of the current heuristic.
Theorem 5 Let s be a state such that s.updated switches from false to true between iterations n and n + 1 in an execution of aLSS-LRTA* or aRTAA* for which h was initially consistent. Then s is in a cost-sensitive heuristic depression of h_n.
Proof: We first prove the result for the case of aLSS-LRTA*. The proof for aRTAA* is very similar and can be found in Section B.3.
Let D be the maximal connected component of states containing s such that:
1. All states in D are in Closed after the call to A* in iteration n, and
2. h_{n+1}(s″) > h_n(s″), for every state s″ in D.
Let s′ be a state in the boundary of D. We first show that h_n(s′) = h_{n+1}(s′). By definition, s′ is either in Closed or in Open. If s′ ∈ Closed then, since s′ ∉ D, it must be the case that s′ does not satisfy condition 2 of the definition of D, and hence h_{n+1}(s′) ≤ h_n(s′). However, since the heuristic is non-decreasing (Theorems 1 and 2), it must be that h_n(s′) = h_{n+1}(s′). On the other hand, if s′ is in Open, its heuristic value is not changed and thus also h_n(s′) = h_{n+1}(s′). We have established, hence, that h_n(s′) = h_{n+1}(s′). Now we are ready to establish our result: that D is a cost-sensitive heuristic depression of h_n.
Let s_d be a state in D. We distinguish two cases.
• Case 1: s′ ∈ Closed. Then, by Proposition 1,
h_n(s′) = h_{n+1}(s′) = k_n(s′, s_b) + h_n(s_b),    (7)
for some s_b ∈ Open. On the other hand, since the heuristic value has increased for s_d, Proposition 1 gives h_n(s_d) < h_{n+1}(s_d) ≤ k_n(s_d, s_b) + h_n(s_b), and since k_n(s_d, s_b) ≤ k_n(s_d, s′) + k_n(s′, s_b), we obtain:
h_n(s_d) < k_n(s_d, s′) + k_n(s′, s_b) + h_n(s_b).    (8)
We now substitute the right-hand side of (8) using (7), and we obtain
h_n(s_d) < k_n(s_d, s′) + h_n(s′).
• Case 2: s′ ∈ Open. Because of Proposition 1 we have
h_{n+1}(s_d) ≤ k_n(s_d, s′) + h_n(s′).
Moreover, by definition of D, we have h_{n+1}(s_d) > h_n(s_d). Combining these two inequalities, we obtain:
h_n(s_d) < k_n(s_d, s′) + h_n(s′).
In both cases, we proved h_n(s_d) < k_n(s_d, s′) + h_n(s′), for any s_d in D and any s′ in the boundary of D. We conclude D is a cost-sensitive heuristic depression of h_n, which finishes the proof.
Now we turn our attention to termination. We will prove that if a solution exists, then it will be found by any of our algorithms. To prove such a result, we need two intermediate lemmas. The first establishes that when the algorithm moves to the best state in Open, the h-value of such a state has not changed more than the h-value of the current state. Formally,
Lemma 1 Let s′ be the state with smallest f-value in Open after the lookahead phase of any of aLSS-LRTA*, daLSS-LRTA*, aRTAA*, or daRTAA*, when initialized with a consistent heuristic h, and let s_current be the current state. Then,
h_n(s′) − h_0(s′) ≤ h_{n+1}(s_current) − h_0(s_current).
Proof: Indeed, by Proposition 1 or 2, h_{n+1}(s_current) = k_n(s_current, s′) + h_n(s′), which can be rewritten as:
h_n(s′) = h_{n+1}(s_current) − k_n(s_current, s′).    (9)
Let π be an optimal path found by A* connecting s_current and s′. Let K_π^0 denote the cost of this path with respect to cost function c_0. Given that the heuristic h_0 is consistent with respect to the graph with cost function c_0, we have that h_0(s_current) ≤ K_π^0 + h_0(s′), which can be rewritten as:
−h_0(s′) ≤ K_π^0 − h_0(s_current).    (10)
Adding (9) and (10), we obtain:
h_n(s′) − h_0(s′) ≤ h_{n+1}(s_current) − h_0(s_current) − (k_n(s_current, s′) − K_π^0).    (11)
Now, because c can only increase over time, the cost of π at iteration n, k_n(s_current, s′), is no smaller than the cost of π at iteration 0, K_π^0. In other words, the amount k_n(s_current, s′) − K_π^0 is non-negative and can be removed from the right-hand side of (11) to produce:
h_n(s′) − h_0(s′) ≤ h_{n+1}(s_current) − h_0(s_current),
which is the desired result.
The second intermediate result to prove termination is the following lemma.
Lemma 2 Let n be an iteration of any of aLSS-LRTA*, daLSS-LRTA*, aRTAA*, or daRTAA*, when initialized with a consistent heuristic h. If s_next is not set equal to the state s′ with least f-value in Open, then:
h_n(s′) − h_0(s′) > h_n(s_next) − h_0(s_next).
Proof: Indeed, if aRTAA* or aLSS-LRTA* is run, this means that s_next is not marked as updated, which means that h_n(s_next) = h_0(s_next), or equivalently, that h_n(s_next) − h_0(s_next) = 0. Moreover, the best state in Open, s′, was not chosen, and hence it must be that s′.updated = true, which means that h_n(s′) − h_0(s′) > 0. We obtain then that h_n(s′) − h_0(s′) > h_n(s_next) − h_0(s_next).
The case of daRTAA* or daLSS-LRTA* is direct from the condition in Line 5 of Algorithm 9. Hence, it is also true that h_n(s′) − h_0(s′) > h_n(s_next) − h_0(s_next).
Now we are ready to prove the main termination result.
Theorem 6 Let P be an undirected finite real-time search problem such that a solution exists. Let h be a consistent heuristic for P. Then, any of aLSS-LRTA*, daLSS-LRTA*, aRTAA*, or daRTAA*, used with h, will find a solution for P.
Proof: Let us assume the contrary. There are two cases under which the algorithms do not return a solution: (a) they return "no solution" in Line 5 (Algorithm 1), and (b) the agent traverses an infinite path that never hits a solution node.
For (a), assume any of the algorithms is in state s before the call to A*. When it reaches Line 5 (Algorithm 1), the open list is empty, which means the agent has exhausted the search space of states reachable from s without finding a solution; this contradicts the fact that a solution node is reachable from s and the fact that the search problem is undirected.
For (b), assume that the agent follows an infinite path π. Observe that in such an infinite execution, after some iteration, say R, the value of variable c does not increase anymore. This is because all states around states in π have been observed in the past. As a consequence, in any iteration after R the agent traverses the complete path identified by the A* lookahead procedure (Line 8 in Algorithm 1).
A second important observation is that, after iteration R, the value of h for the states in π is finite and cannot increase anymore. Indeed, by Theorems 3 and 4, h remains consistent and hence admissible, which means that h(s) is bounded by the actual cost to reach a solution from s, for any s in π. Moreover, since c does not change anymore, the call to the update function will not change the value of h(s), for every s in π.
Now we are ready to finish the proof. Consider the algorithm executing past iteration R. Since the path is infinite and the state space is finite, in some iteration after R the algorithm decides to go back to a previously visited state. As such, we assume the agent visits state t_0 and then moves through states t_1, t_2, …, t_{r−1}, t_r, t_0, …. Since the heuristic does not change anymore, we simply denote it by h, regardless of the iteration number. We distinguish two cases.

Case 1
The agent always decides to move to the best state in Open, s′, and hence, depending on the algorithm that is used, by Proposition 1 or 2, h(s) = k(s, s′) + h(s′), which implies h(s) > h(s′), since action costs are positive. This implies that:
h(t_0) > h(t_1) > · · · > h(t_r) > h(t_0),
which is a contradiction; it cannot be the case that h(t_0) > h(t_0).
Case 2
At least once, the agent does not move to the best state in Open. Without loss of generality, we assume this happens only once, for a state t_i with i < r. Let t* be a state with the smallest f-value in Open after the lookahead is carried out from t_i.
By Lemma 1, we can write the following inequalities:
h(t_{j+1}) − h_0(t_{j+1}) ≤ h(t_j) − h_0(t_j), for every j ≠ i (indices taken modulo r + 1).
Let I be the set containing these inequalities. Now, since when in state t_i the algorithm decides to move to t_{i+1} instead of t*, we use Lemma 2, together with Lemma 1 applied to t* and t_i, to write:
h(t_{i+1}) − h_0(t_{i+1}) < h(t_i) − h_0(t_i).    (12)
The inequalities in I together with (12) entail h(t_0) − h_0(t_0) > h(t_0) − h_0(t_0), which is a contradiction.
In both cases we derive contradictions, and hence we conclude the algorithm cannot enter an infinite loop and thus finds a solution.
We now turn our attention to convergence. The literature often analyzes the properties of real-time heuristic search algorithms when they are run on a sequence of trials (e.g., Shimbo & Ishida, 2003). Each trial is characterized by running the algorithm from the start state until the problem is solved. The heuristic function h resulting from trial n is used to feed the algorithm's h variable in trial n + 1.
Before stating the convergence theorem, we prove a result related to how h increases between successive iterations or trials. Each iteration of our search algorithms potentially increases h, making it more informed. The following result implies that this improvement cannot be infinitesimal.
Lemma 3 Let P be a finite undirected search problem, and let Sol be the set of states in P from which a solution can be reached. Let n be an iteration of any of aLSS-LRTA*, daLSS-LRTA*, aRTAA*, or daRTAA*. Then h_n(s) can only take on a finite number of values, for every s in P.
Proof: Given Proposition 1, along an execution of any of the algorithms of the LSS-LRTA* family, it is simple to prove by induction on n that:
h_n(s) = h_0(s′) + K, for some state s′,
for any n, where K is the sum of the costs of 0 or more arcs in P under cost function c_n.
On the other hand, given the update rule of any of the algorithms of the RTAA* family (e.g., Line 6 in Algorithm 8),
h_n(s) = h_0(s′) + K − K′, for some state s′,
for any n, where K and K′ correspond to sums of the costs of some arcs in P under cost function c_n.
Since in finite problems there is a finite number of arcs, the quantities referred to by K and K′ can only take on a finite number of values. This implies that h_n(s), for any s in P, can only take on a finite number of values, which concludes the proof.
Below we show that if h converges after a sequence of trials, the solution found with h is optimal.
Theorem 7 Let P be an undirected finite real-time search problem such that a solution exists. Let h be a consistent heuristic for P. When initialized with h, a sequence of trials of any of aLSS-LRTA*, daLSS-LRTA*, aRTAA*, or daRTAA* converges to an optimal solution.
Proof: First, observe that since the heuristic is initially admissible, it remains admissible after any number of trials are run. This is a consequence of Theorems 3 and 4. Hence, for every state s from which a goal state can be reached, h(s) is bounded from above by the (finite) amount h*(s).
On the other hand, by Lemma 3, the h-values of states from which a solution is reachable can only increase a finite number of times. After a sequence of trials, the value of h thus converges; i.e., for at least one complete trial, h(s) is not changed, for any s in P. We can also assume that in such a trial the value of c does not change either, since once h converges, the same path of states is always followed and thus no new cost increases occur.
Let us focus on a run of any of our algorithms in which both h and c do not change. Observe that this means that h_n(s) = h_0(s) for any n (recall h_0 is the heuristic given as input to the algorithm). Independent of the algorithm used, this implies the algorithm always moves to the best state in Open. Let s_1 … s_m be the sequence of states that were assigned to s_next during the execution (s_m is thus a goal state), and let s_0 denote the start state. Observe that since c does not change along the execution, states s_1 … s_m are actually visited by the agent. Depending on the algorithm that is used, by Proposition 1 or 2, we know:
h(s_i) = k(s_i, s_{i+1}) + h(s_{i+1}), for every i ∈ {0, …, m − 1},    (13)
where k(s_i, s_{i+1}) is the cost of an optimal path between s_i and s_{i+1}. Since the heuristic is consistent, h(s_m) = 0, and thus with the family of equations in (13) we conclude that h(s_0) is equal to the sum of k(s_i, s_{i+1}) for i = 0, …, m − 1, which corresponds to the cost of the path traversed by the agent. But we know that h is also admissible, so:
h(s_0) ≤ h*(s_0).
Since h*(s_0) is the cost of an optimal solution, we conclude the path found has an optimal cost.

Empirical Evaluation
We evaluated our algorithms at solving real-time navigation problems in unknown environments. LSS-LRTA* and RTAA* are used as a baseline for our comparisons. For fairness, we used comparable implementations that share the same underlying codebase. For example, all search algorithms use the same implementation of binary heaps as priority queues, and break ties among cells with the same f-value in favor of cells with larger g-values, which is known to be a good tie-breaking strategy. We carried out our experiments over two sets of benchmarks: deployed game maps and mazes. We used twelve maps from deployed video games. The first six are taken from the game Dragon Age, and the remaining six are taken from the game StarCraft. The maps were retrieved from Nathan Sturtevant's pathfinding repository.¹ In addition, we used four maze maps taken from the HOG2 repository.² They are shown in Figure 5. All results were obtained using a Linux machine with an Intel Xeon CPU running at 2 GHz and 12 GB of RAM.
All maps are regarded as undirected, eight-neighbor grids. Horizontal and vertical movements have cost 1, whereas diagonal movements have cost √2. We used the octile distance (Sturtevant & Buro, 2005) as the heuristic. For our evaluation, we ran all algorithms with 10 different lookahead values. For each map, we generated 500 test cases, choosing the start and goal cells of each at random.
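For reference, the octile distance on such a grid can be computed with the standard closed-form formula (shown here as a sketch):

```python
import math

def octile_distance(x1, y1, x2, y2):
    """Cost of an optimal obstacle-free path on an eight-neighbor grid
    with straight moves of cost 1 and diagonal moves of cost sqrt(2)."""
    dx, dy = abs(x1 - x2), abs(y1 - y2)
    # Take min(dx, dy) diagonal steps, then max(dx, dy) - min(dx, dy)
    # straight steps: sqrt(2)*min + (max - min) = (sqrt(2) - 1)*min + max.
    return (math.sqrt(2) - 1.0) * min(dx, dy) + max(dx, dy)
```

On obstacle-free grids this heuristic is exact, and on grids with obstacles it is consistent, which is the property the theoretical results above require.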
In the presentation of our results we sometimes use the concept of improvement factor. When we say that the improvement factor of an algorithm A with respect to B in terms of average solution cost is n, we mean that, on average, A produces solutions that are n times cheaper than the ones found by B.
Next we describe the different views of the experimental data shown in plots and tables. We then continue to draw our experimental conclusions.

An Analysis of the LSS-LRTA * Variants
This section analyzes the performance of LSS-LRTA*, aLSS-LRTA*, and daLSS-LRTA*. Figure 6 shows two plots of the average solution cost versus the average planning time per episode for the three algorithms on the games and mazes benchmarks. Planning time per episode is an accurate measure of the effort carried out by each of the algorithms. Thus these plots illustrate how solution quality varies depending on the effort that each algorithm carries out.
Regardless of the search effort, we observe that aLSS-LRTA* slightly but consistently outperforms LSS-LRTA* in solution cost. On the games benchmarks we observe that, for equal search effort, aLSS-LRTA* produces average improvement factors between 1.08 and 1.20 in terms of solution cost. In mazes, on the other hand, improvement factors are between 1.04 and 1.25. In games, the largest improvements are observed when the lookahead parameter (and hence the search time per episode) is rather small. Thus aLSS-LRTA*'s advantage over LSS-LRTA* is most clearly observed when tighter time constraints are imposed on planning episodes.
Results in the real-time search literature are often presented in the form of tables, with search performance statistics reported for each lookahead value. We provide such tables in the appendix of the paper (Tables 5 and 6). An important observation that can be drawn from the tables is that the times per planning episode of LSS-LRTA* and aLSS-LRTA* are very similar for a fixed lookahead value; indeed, the time per planning episode of aLSS-LRTA* is only slightly larger than that of LSS-LRTA*. This is interesting since it shows that the worst-case asymptotic complexity does not seem to be reached by aLSS-LRTA* in practice (cf. Table 2).
The experimental results show that daLSS-LRTA*'s more refined mechanism for escaping depressions is better than that of aLSS-LRTA*. For any given value of the search effort, daLSS-LRTA* consistently outperforms aLSS-LRTA* by a significant margin in solution cost in games and mazes. daLSS-LRTA* also outperforms aLSS-LRTA* in total search time, i.e., the overall time spent searching until a solution is found. Details can be found in Tables 5 and 6. When the search effort for each algorithm is small, daLSS-LRTA*'s average solution quality is substantially better than aLSS-LRTA*'s; the improvements are actually close to an order of magnitude. daLSS-LRTA* consistently outperforms LSS-LRTA* by a significant margin in total search time and solution quality, independent of the search effort employed. In terms of solution cost, daLSS-LRTA* produces average improvement factors with respect to LSS-LRTA* between 1.66 and an order of magnitude on the game benchmarks, and between 1.49 and an order of magnitude on the mazes benchmarks. For a fixed lookahead (see Tables 5 and 6 for the specific numbers), the time spent per planning episode by daLSS-LRTA* is larger than that spent by LSS-LRTA*, because daLSS-LRTA* makes more heap percolations than LSS-LRTA*. However, for small values of the lookahead parameter, daLSS-LRTA* obtains better solutions using less time per planning episode than LSS-LRTA* used with a much larger lookahead. For example, in game maps, with a lookahead parameter equal to 32, daLSS-LRTA* obtains better solutions than LSS-LRTA* with the lookahead parameter equal to 128, requiring, on average, 2.6 times less time per planning episode. In mazes, with a lookahead parameter equal to 16, daLSS-LRTA* obtains better solutions than LSS-LRTA* with the lookahead parameter equal to 64, requiring, on average, 2.4 times less time per planning episode.
For low values of the lookahead parameter (i.e., very limited search effort), daLSS-LRTA* obtains better solutions in less time per planning episode than aLSS-LRTA* used with a much larger lookahead. For example, in game maps, with a lookahead parameter equal to 1, daLSS-LRTA* obtains better solutions than aLSS-LRTA* with the lookahead parameter equal to 16, requiring, on average, 14.1 times less time per planning episode. In mazes, with a lookahead parameter equal to 1, daLSS-LRTA* obtains better solutions than aLSS-LRTA* with the lookahead parameter equal to 16, requiring, on average, 11.6 times less time per planning episode.
For a fixed lookahead (see Tables 5 and 6), the time taken by daLSS-LRTA* per planning episode is larger than that taken by aLSS-LRTA*. This increase can be explained because, on average, daLSS-LRTA*'s open list grows larger than that of aLSS-LRTA*. This is due to the fact that, in the benchmarks we tried, daLSS-LRTA* tends to expand cells that have fewer obstacles around them than aLSS-LRTA* does. As a result, daLSS-LRTA* expands more cells in the learning phase or makes more heap percolations in the lookahead phase than aLSS-LRTA*.
Results show that, among the LSS-LRTA* variants, daLSS-LRTA* is the algorithm with the best performance. In fact, daLSS-LRTA* is clearly superior to LSS-LRTA*. Of the 60,000 runs (12 maps × 500 test cases × 10 lookahead values) on the game benchmarks, daLSS-LRTA* obtains a better solution quality than LSS-LRTA* in 69.9% of the cases, they tie in 20.9% of the cases, and LSS-LRTA* obtains a better-quality solution in only 9.2% of the cases.
Of the 20,000 runs (4 maps × 500 test cases × 10 lookahead values) on the mazes benchmarks, daLSS-LRTA* obtains a better solution quality than LSS-LRTA* in 75.1% of the cases, they tie in 3.3% of the cases, and LSS-LRTA* obtains a better-quality solution in 21.7% of the cases.

An Analysis of the RTAA * Variants
In this section we analyze the relative performance of RTAA*, aRTAA*, and daRTAA*. Figure 7 shows two plots of the average solution cost versus the average effort carried out per search episode.
For the same search effort, we do not observe significant improvements of aRTAA* over RTAA*. Indeed, only for small values of the average time per search episode does aRTAA* improve the solution quality upon that of RTAA*. In general, however, both algorithms seem to have very similar performance.
On the other hand, the results show that daRTAA*'s mechanism for escaping depressions is substantially better than that of aRTAA*. For small values of the lookahead parameter (and hence reduced search effort), daRTAA* obtains better solutions than the other variants used with a much larger lookahead. Indeed, for limited search effort, daRTAA* is approximately an order of magnitude better than the two other algorithms. For example, in game maps, with a lookahead parameter equal to 1, daRTAA* obtains better solutions than aRTAA* with the lookahead parameter equal to 16, requiring, on average, 10.4 times less time per planning episode.
daRTAA* substantially improves upon RTAA*, which is among the best real-time heuristic search algorithms known to date. In game maps, daRTAA* needs only a lookahead parameter of 16 to obtain solutions better than RTAA* with a lookahead parameter of 64. With those values, daRTAA* requires about 2.3 times less time per planning episode than RTAA*.
Our results show that daRTAA* is the best-performing algorithm of the RTAA* family. Of the 60,000 runs on the game-map benchmarks, daRTAA* obtains a better solution quality than RTAA* in 71.2% of the cases, they tie in 20.5% of the cases, and RTAA* obtains a better-quality solution in only 8.3% of the cases. Of the 20,000 runs in mazes, daRTAA* obtains a better solution quality than RTAA* in 78.0% of the cases, they tie in 2.7% of the cases, and RTAA* obtains a better-quality solution in 19.4% of the cases.
7.3 daLSS-LRTA* Versus daRTAA*
daRTAA*, the best-performing algorithm among the RTAA* variants, is also superior to daLSS-LRTA*, the best-performing algorithm among the LSS-LRTA* variants. Figure 8 shows average solution costs versus search effort, in game maps and mazes.
As can be seen in the figure, when the lookahead parameter is small (i.e., when little search effort is allowed), the performance of daRTAA* and daLSS-LRTA* is fairly similar. However, as more search is allowed per planning episode, daRTAA* outperforms daLSS-LRTA*. For example, in game benchmarks, daRTAA*, when allowed to spend 0.08 milliseconds per episode, obtains solutions comparable to those of daLSS-LRTA* when the latter is allowed to spend 0.18 milliseconds per episode.
Furthermore, the slopes of the curves are significantly more favorable to daRTAA* than to daLSS-LRTA*. This can be verified in both types of benchmarks and is important since it speaks to an inherent superiority of the RTAA* framework when time per planning episode is the most relevant factor.

An Analysis of Disaggregated Data
The performance of real-time algorithms usually varies depending on the map used. To illustrate how the algorithms perform in different maps, Figure 9 shows the improvement in solution cost of daLSS-LRTA* over LSS-LRTA* on 4 game and 4 maze benchmarks. The results confirm that improvements can be observed in all domains, thus showing that the average values are representative of daLSS-LRTA*'s behavior in individual benchmarks. Although aLSS-LRTA* and daLSS-LRTA* outperform LSS-LRTA* on average, there are specific cases in which this does not hold. Most notably, we observe that in one of the maze benchmarks daLSS-LRTA* does not improve significantly with respect to LSS-LRTA* for large values of the lookahead parameter. We discuss this further in the next section. Figure 10 shows the analogous improvement factors of daRTAA* over RTAA*. In this plot, the algorithms show relative performance similar to that of the LSS-LRTA* variants.

A Worst-Case Experimental Analysis
Although all our algorithms perform a resource-bounded computation per planning episode, it is hard to tune the lookahead parameter in such a way that both LSS-LRTA* and daLSS-LRTA* incur the same worst-case planning effort. This is because the time spent extracting the best state from the open list depends on the structure of the search space expanded in each lookahead phase.
In this section we carry out an experimental worst-case analysis based on a theoretical worst-case bound. This bound is obtained from the worst-case effort per planning step as follows. If RTAA* performs k expansions per planning episode, then the open list may contain up to 8k states, because each state has at most 8 neighbors. In the worst case, the effort spent adding all those states to the open list is 8k log 8k. daRTAA* makes the same effort to insert those states into the open list, but incurs an additional worst-case cost of 8k log 8k to remove all states from the open list. Therefore, in a worst-case scenario, given a lookahead parameter equal to k, daRTAA* makes double the effort that RTAA* makes for the same parameter.
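To make the arithmetic of this bound concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that computes the worst-case heap-operation counts under the stated model. It assumes base-2 logarithms and a branching factor of 8; the function name and return format are hypothetical.

```python
import math

def worst_case_heap_ops(k, branching=8):
    """Worst-case heap-percolation counts per planning episode.

    Assumes, as in the text, that k expansions can place up to
    branching * k states on Open and that each heap insertion or
    removal costs log2(branching * k) percolations.
    """
    m = branching * k                    # maximum size of Open
    insert_cost = m * math.log2(m)       # RTAA*: insert every frontier state
    remove_cost = m * math.log2(m)       # daRTAA* extra: remove them all again
    return {"rtaa": insert_cost, "da_rtaa": insert_cost + remove_cost}

ops = worst_case_heap_ops(k=16)
```

Under this model daRTAA*'s worst-case effort is exactly double RTAA*'s, which is the factor used to displace the curves in Figure 11.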
Based on that worst-case estimation, Figure 11 presents the performance of the RTAA* variants, displacing the RTAA* curve by a lookahead factor of 2. We conclude that in this worst-case scenario daRTAA* still clearly outperforms RTAA*. Gains vary from one order of magnitude, for low values of the lookahead parameter, to very similar performance when the lookahead parameter is high.
We remark, however, that we never observed this worst case in practice. For example, in our game benchmarks, RTAA*, when used with a lookahead parameter 2k, spends, on average, 50% more time per planning episode than daRTAA* used with lookahead parameter k.
Figure 11: Plots showing the average time per planning episode and average solution cost per lookahead parameter, adjusting the performance of RTAA* using a theoretical worst-case bound of 2. As such, for RTAA*, the average cost reported for a lookahead of k actually corresponds to the cost obtained for a lookahead of 2k. Costs are shown on a log-scale.

Discussion
There are a number of aspects of our work that deserve discussion. We focus on two of them. First, we discuss the setting in which we have evaluated our work, which focused on showing performance improvements in the first trial of a search in an a priori unknown domain, without considering other settings. Second, we discuss scenarios in which our algorithms may not exhibit the average performance improvements shown in the previous section.

The Experimental Setting: Unknown Environments, First Trial
Our algorithms are tailored to solving quickly a search problem in which the environment is initially unknown. This setting has several applications, including goal-directed navigation in unknown terrain (Koenig et al., 2003; Bulitko & Lee, 2006). It has also been widely used to evaluate real-time heuristic search algorithms (e.g., Koenig, 1998; Hernández & Meseguer, 2005; Bulitko & Lee, 2006; Hernández & Meseguer, 2007; Koenig & Sun, 2009).
On the other hand, we did not present an evaluation of our algorithms in environments that are known a priori. In a previous paper (Hernández & Baier, 2011d), however, we showed that aLSS-LRTA* obtains similar improvements over LSS-LRTA* when the environment is known. We omit results on known environments since RTAA* and LSS-LRTA* are not representative of the state of the art in those scenarios. Indeed, algorithms like TBA* (Björnsson et al., 2009) outperform LSS-LRTA* significantly, and it is not immediately obvious how to incorporate our techniques into algorithms like TBA*.
We did not present experimental results regarding convergence after several successive search trials. Recall that in this setting, the agent is "teleported" back to the initial location after each trial and a new search trial is carried out. Most real-time search algorithms, ours included, are guaranteed to eventually find an optimal solution. Our algorithms do not particularly excel in this setting. This is because the heuristic value of fewer states is updated per trial, and hence the heuristic values of states in the search space converge slowly to the correct values. As such, generally more trials are needed to converge.
Convergence performance is important for problems that are solved offline and for which real-time approaches may be adequate for computing an approximation of the optimal solution. This is the case for the problem of computing an optimal policy in MDPs using Real-Time Dynamic Programming (Barto, Bradtke, & Singh, 1995). We are not aware, however, of any application in deterministic search in which searching offline using real-time search would yield better performance than using other suboptimal search algorithms (e.g., Richter, Thayer, & Ruml, 2010; Thayer, Dionne, & Ruml, 2011). Indeed, Wilt, Thayer, and Ruml (2010) concluded that real-time algorithms, though applicable, should not be used for solving shortest-path problems unless there is a need for real-time action.

Bad Performance Scenarios
Although our algorithms clearly outperform their predecessors LSS-LRTA* and RTAA* on average, it is possible to contrive families of increasingly difficult path-finding tasks in which our algorithms perform worse than their respective predecessors.
Consider for example the 4-connected grid-world scenario of size 7×n shown in Figure 12. The goal of the agent is to reach the state labeled G, starting from S. Assume furthermore that to solve this problem we run aRTAA* or aLSS-LRTA* with lookahead parameter equal to 1, and that ties are broken such that the up movement has priority over the down movement. In the initial state both algorithms will determine that the initial state (cell E3) is in a heuristic depression and thus will update the heuristic of cell E3. Cell E3 is now marked as in a depression. Since both cells D3 and F3 have the same heuristic value and ties are broken in favor of upper cells, the agent is then moved to cell D3. In later iterations, the algorithm will not prefer to move to cells that have been updated, and therefore the agent will not go back to state E3 unless it is currently in D3 and (at least) C3 is also marked. However, the agent will not go back to D3 quickly. Indeed, it will visit all states to the right of Wall 1 and Wall 2 before coming back to E3. This happens because, as the algorithm executes, it will update and mark all visited states, and will never prefer to go back to a previously marked position unless all current neighbors are also marked.
In the same situation, RTAA * and LSS-LRTA * , run with lookahead parameter 1 will behave differently depending on the tie-breaking rules.Indeed, if the priority is given by up (highest), down, right, and left (lowest), then both RTAA * and LSS-LRTA * find the goal fairly quickly as they do not have to visit states to the right of the walls.Indeed, since the tie-breaking rules prefer a move up, the agent reaches cell A3 after 4 moves, and then proceeds straight to the goal.In such situations, the performance of aRTAA * or aLSS-LRTA * can be made arbitrarily worse than that of RTAA * or LSS-LRTA * , as n is increased.
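The effect of such a tie-breaking rule can be sketched in a few lines of Python. This is an illustration of the tie-breaking idea only; the names and scores are hypothetical, not taken from the paper's implementation.

```python
def choose_next(neighbor_scores):
    """Return the label of the first neighbor attaining the minimum score.

    neighbor_scores: list of (label, score) pairs given in priority
    order. Python's min() keeps the first of several equal minima, so
    earlier entries win ties.
    """
    return min(neighbor_scores, key=lambda pair: pair[1])[0]

# Priority up > down > right > left, with up and down tied on h-value:
assert choose_next([("up", 3), ("down", 3), ("right", 5), ("left", 5)]) == "up"
# Swapping the priority order changes the move chosen on a tie,
# and hence the whole trajectory of the agent:
assert choose_next([("down", 3), ("up", 3), ("right", 5), ("left", 5)]) == "down"
```

The two assertions mirror the discussion above: nothing but the order in which equally promising neighbors are listed decides which of the two very different trajectories the agent follows.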
A quite different situation arises if tie-breaking follows the priorities up (highest), right, down, and left (lowest). In this case all four algorithms have to visit the states to the right of both walls. Indeed, once A3 is reached, there is a tie between the h-values of B3 and A4. The agent prefers moving to A4, and from there on it continues moving to the right of the grid in a zig-zag fashion. After investigating executions of our "da-" algorithms on the maze512-4-0 benchmark (performance is shown in Figures 9 and 10), we believe that the lack of improvement in this particular benchmark can be explained by the situation just described. This benchmark is a 512 × 512 maze in which corridors have a 4-cell width. For low lookahead values, the number of updates is not high enough to "block" the corridors; thus, for low values of the lookahead parameter the increase in performance is still reasonably good. As the lookahead increases, the algorithm updates more states in a single iteration, and, as a result, chances are that good paths become blocked.
Interestingly, however, we do not observe this phenomenon on mazes with wider corridors or on game maps. A necessary condition to "block" a corridor that leads to a solution is that the agent has sufficient knowledge about the borders of the corridor. In mazes with narrow corridors this may happen with relative ease, as the agent only needs a few moves to travel between opposite walls. In grids in which corridors are wide, however, knowledge about the existence of obstacles (walls) is harder for the agent to obtain, and thus the chances of updating, and thereby blocking, a corridor that leads to a solution are lower.
We believe it is possible to prove that our algorithms are always better or always worse for specific search-space topologies. We think, nevertheless, that such an analysis may be hard to carry out, and that its practical significance may be limited. Therefore we decided to exclude it from the scope of this work. On the other hand, we think that the impressive performance exhibited by our algorithms in many benchmarks is a sufficiently strong argument in favor of using them in domains that do not contain narrow corridors.

Related Work
Besides LSS-LRTA* and RTAA*, there are a number of real-time search algorithms that can be used in a priori unknown environments. LRTA*(k) and LRTA*-LS(k) (Hernández & Meseguer, 2005, 2007) are two algorithms competitive with LSS-LRTA* that are capable of learning the heuristic of several states at the same time; the states for which the heuristic is learned are independent from those expanded in the lookahead phase. They may escape heuristic depressions more quickly than LRTA*, but their action-selection mechanism is not aware of heuristic depressions. eLSS-LRTA* is a preliminary version of aLSS-LRTA* that we presented in an extended abstract (Hernández & Baier, 2011a). It is outperformed by aLSS-LRTA* on average, as it usually becomes too focused on avoiding depressions.
Our algorithms have been designed to find good-quality solutions on the first search trial. Other algorithms described in the literature have been designed with different objectives in mind. For example, RIBS (Sturtevant, Bulitko, & Björnsson, 2010) is a real-time algorithm specifically designed to converge quickly to an optimal solution. It moves the agent as if an iterative-deepening A* search were carried out; as such, the first solution it finds is optimal. As a consequence, RIBS potentially requires more time to find one solution than LSS-LRTA* does, but if an optimal solution is required, RIBS will likely outperform LSS-LRTA* run to convergence. f-LRTA* (Sturtevant & Bulitko, 2011) is another recent real-time search algorithm, building upon ideas introduced by RIBS, in which the g-cost of states is learned through successive trials. It has good convergence performance, but needs to do more computation per planning step than LSS-LRTA*.
Incremental A* methods, like D* (Stentz, 1995), D* Lite (Koenig & Likhachev, 2002), Adaptive A* (Koenig & Likhachev, 2006a), and Tree Adaptive A* (Hernández, Sun, Koenig, & Meseguer, 2011), are search methods that also allow solving goal-directed navigation problems in unknown environments. If the first-move delay is required to be short, however, incremental A* methods cannot be used, since they require computing a complete solution before the agent starts to move. Real-time search remains the only applicable strategy for this task when limited time is allowed per planning episode.
Less related to our work are algorithms that abide by real-time search constraints but assume the environment is known in advance and that sufficient time is given prior to solving the problem, allowing preprocessing. Examples are D LRTA* (Bulitko, Luštrek, Schaeffer, Björnsson, & Sigmundarson, 2008), kNN-LRTA* (Bulitko et al., 2010), tree subgoaling (Hernández & Baier, 2011b), and real-time search via compressed path databases (Botea, 2011).
Finally, the concept of cost-sensitive depression in real-time search can be linked to other concepts used to describe the poor performance of planning algorithms. For example, Hoffmann (2005, 2011) analyzed the existence of plateaus in h+, an effective admissible domain-independent planning heuristic, and how they negatively affect the performance of otherwise fast planning algorithms. Cushing, Benton, and Kambhampati (2011) introduced the concept of ε-traps, which is related to the poor performance of best-first search in problems in which action costs have high variance. ε-traps are areas of the search space connected by actions of least cost; as such, the h-values of states in ε-traps are not considered in their analysis. Although we think that the existence of cost-sensitive heuristic depressions does affect the performance of A*, the exact relation between the performance of A* and heuristic depressions does not seem obvious.

Summary and Future Work
We have presented a simple principle for guiding real-time search algorithms away from heuristic depressions. We proposed two alternative approaches for implementing the principle: mark-and-avoid and move-to-border. In the first approach, states that are proven to be in a depression are marked in the update phase, and then avoided, if possible, when deciding the next move. In the second approach, the algorithm selects as the next move the state that seems closest to the border of a depression.
Both approaches can be implemented efficiently. Mark-and-avoid requires very little overhead, which results in an almost negligible increase in time per planning episode. Move-to-border, on the other hand, requires more overhead per planning episode, but, given a time deadline per planning episode, it obtains the best-quality solutions.
Experimentally, we have shown that in goal-directed navigation tasks in unknown terrain, our algorithms outperform their predecessors RTAA* and LSS-LRTA*. Indeed, the algorithms based on move-to-border, daLSS-LRTA* and daRTAA*, are significantly more efficient than LSS-LRTA* and RTAA*, especially when the lookahead parameter is small.
The four algorithms we propose have good properties: in undirected, finite search spaces, they are guaranteed to find a solution if one exists. Moreover, they converge to an optimal solution after running a number of search trials.
Depression avoidance is a principle applicable to other real-time heuristic search algorithms. Indeed, we think it could be easily incorporated into LRTA*(k), LRTA*-LS(k), and P-LRTA* (Rayner et al., 2007). All those algorithms have specialized mechanisms for updating the heuristic, but their mechanism to select the next state is just like that of LSS-LRTA* run with lookahead parameter equal to 1. We think significant improvements could be achieved if the procedure to select the next movement were changed to daLSS-LRTA*'s. We also believe depression avoidance could be incorporated into multi-agent real-time search algorithms (e.g., Knight, 1993; Yokoo & Kitamura, 1996; Kitamura, Teranishi, & Tatsumi, 1996).

Acknowledgments

B.2 Proof of Theorem 4
We establish that, for any pair of neighbor states $s$ and $s'$, $h_{n+1}(s) \leq c_{n+1}(s, s') + h_{n+1}(s')$. We divide the rest of the argument into three cases.
Case 1. Both $s$ and $s'$ are in Closed. By RTAA*'s update rule, we have that $h_{n+1}(s) = f(s^*) - g(s)$ (23) and $h_{n+1}(s') = f(s^*) - g(s')$ (24), for some $s^*$ in Open. Subtracting (24) from (23), we obtain:
$h_{n+1}(s) - h_{n+1}(s') = g(s') - g(s)$. (25)
Since $h_n$ is consistent, $g(s)$ and $g(s')$ correspond to the cost of the shortest path between $s_{current}$ and, respectively, $s$ and $s'$. Thus $g(s') = k_n(s_{current}, s')$ and $g(s) = k_n(s_{current}, s)$, and therefore:
$h_{n+1}(s) - h_{n+1}(s') = k_n(s_{current}, s') - k_n(s_{current}, s)$. (26)
Let us consider a path from $s_{current}$ to $s'$ that goes optimally to $s$, and then goes from $s$ to $s'$. The cost of such a path must be at least $k_n(s_{current}, s')$. In other words, $k_n(s_{current}, s') \leq k_n(s_{current}, s) + c_n(s, s')$, which directly implies:
$k_n(s_{current}, s') - k_n(s_{current}, s) \leq c_n(s, s')$. (27)
Now we combine (27) and (26) to obtain $h_{n+1}(s) - h_{n+1}(s') \leq c_n(s, s')$. And, finally, since $c_n \leq c_{n+1}$, we conclude that $h_{n+1}(s) \leq c_{n+1}(s, s') + h_{n+1}(s')$, which finishes the proof for this case.
Case 2. One state among $s$ and $s'$ is in Closed, and the other state is not in Closed.
Without loss of generality, assume $s \in$ Closed. Since $s'$ is not in Closed, it must be in Open, because $s$ was expanded by A* and $s'$ is a neighbor of $s$. For some state $s^*$ in Open, we have that
$h_{n+1}(s) = g(s^*) + h_n(s^*) - g(s)$. (30)
Again we use the fact that, with the consistent heuristic $h_n$, A* expands nodes with increasing $f$-values. Note that $s^*$ is the state that would have been expanded next by A*, and that $s'$ would have been expanded later on. Moreover, as soon as $s'$ was expanded, the $g$-value of $s'$ would be the optimal cost of the path from $s_{current}$ to $s'$, that is, $k_n(s_{current}, s')$. Therefore, we can write:
$g(s^*) + h_n(s^*) \leq k_n(s_{current}, s') + h_n(s')$, (31)
as $k_n(s_{current}, s') + h_n(s')$ is the $f$-value of $s'$ upon expansion. Combining (30) and (31), we obtain:
$h_{n+1}(s) \leq k_n(s_{current}, s') - g(s) + h_n(s')$. (32)
However, since $s$ is in Closed, $g(s)$ is the cost of an optimal path from $s_{current}$ to $s$, and thus $g(s) = k_n(s_{current}, s)$. We now use the same argument as in the previous case to conclude that:
$k_n(s_{current}, s') - k_n(s_{current}, s) \leq c_n(s, s')$. (33)
Combining (32) and (33), we obtain:
$h_{n+1}(s) \leq c_n(s, s') + h_n(s')$. (34)
Since $s'$ is not in Closed, $h_{n+1}(s') = h_n(s')$. Furthermore, we know that $c_n \leq c_{n+1}$. Substituting in (34), we obtain $h_{n+1}(s) \leq c_{n+1}(s, s') + h_{n+1}(s')$, which allows us to conclude the proof for this case.
Case 3. Both $s$ and $s'$ are not in Closed. The proof is the same as that for Case 3 in Theorem 3. In all three cases we proved the desired inequality, and therefore we conclude that the heuristic $h_{n+1}$ is consistent with respect to cost function $c_{n+1}$.

B.3 An Appendix for the Proof of Theorem 5
This section describes the proof of Theorem 5 for the specific case of aRTAA*.
Let $D$ be the maximal connected component of states connected to $s$ such that (1) all states in $D$ are in Closed after the call to A* in iteration $n$, and (2) any state $s_d$ in $D$ is such that $h_{n+1}(s_d) > h_n(s_d)$. We prove that $D$ is a cost-sensitive heuristic depression of $h_n$.
Let $s'$ be a state on the boundary of $D$; as argued for the case of aLSS-LRTA*, we can show that $h_n(s') = h_{n+1}(s')$. Now, let $s_d$ be a state in $D$. We continue the proof by showing

9 s_current ← current agent position
10 update action costs (if they have increased)

Algorithm 2: Bounded A* lookahead
1 procedure A*()
2 for each s ∈ S do g(s) ← ∞
…
7 while the s ∈ Open with minimum f-value is such that s ∉ G and expansions < k do
8   Remove the state s with smallest f-value from Open
9   Insert s into Closed
10  for each s' ∈ Succ(s) do
11    if g(s') > g(s) + c(s, s') then
12      g(s') ← g(s) + c(s, s')
13      s'.back ← s
14      if s' ∈ Open then remove s' from Open
15      Insert s' in Open
16  expansions ← expansions + 1
Algorithm 3: Selection of the Best State used by LSS-LRTA*, RTAA*, and other algorithms.
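A minimal, runnable Python sketch of the bounded A* lookahead of Algorithm 2 follows. It assumes a 4-connected grid with unit action costs; the function signature, grid encoding, and return values are our own illustration rather than the paper's code.

```python
import heapq
import itertools

def bounded_astar(start, goal, blocked, width, height, k, h):
    """Expand at most k states on a 4-connected unit-cost grid.

    Returns (g, back, closed, best), where best is the goal if it was
    reached, the frontier state with minimum f-value if the expansion
    budget ran out, or None if Open became empty.
    """
    g = {start: 0}
    back = {}                       # back-pointers, as in Algorithm 2
    closed = set()
    counter = itertools.count()     # tie-breaker so the heap never compares states
    open_heap = [(h(start, goal), next(counter), start)]
    in_open = {start}
    expansions = 0
    while open_heap:
        f, _, s = heapq.heappop(open_heap)
        if s not in in_open:        # stale heap entry for an already-improved state
            continue
        if s == goal or expansions >= k:
            return g, back, closed, s
        in_open.discard(s)
        closed.add(s)
        x, y = s
        for s2 in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= s2[0] < width and 0 <= s2[1] < height) or s2 in blocked:
                continue
            if g.get(s2, float("inf")) > g[s] + 1:
                g[s2] = g[s] + 1
                back[s2] = s
                in_open.add(s2)
                heapq.heappush(open_heap, (g[s2] + h(s2, goal), next(counter), s2))
        expansions += 1
    return g, back, closed, None

# Usage on an empty 5x5 grid with the Manhattan heuristic:
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
g, back, closed, best = bounded_astar((0, 0), (4, 4), set(), 5, 5, 100, manhattan)
```

With a budget k = 0 the procedure returns the start state itself as the best frontier state, mirroring the while-loop guard of the pseudocode.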

Figure 1: Average solution cost obtained by LSS-LRTA* and RTAA* versus planning time per episode on 12 game maps.

Algorithm 6: Modified Dijkstra procedure used by aLSS-LRTA*.
1 procedure ModifiedDijkstra()
2 if first run then
3   for each s ∈ S do s.updated ← false /* initialization of update flag */
4 for each s ∈ Closed do h(s) ← ∞
5 while Closed ≠ ∅ do
6   Extract an s with minimum h-value from Open
7   if h(s) > h0(s) then s.updated ← true
8   if s ∈ Closed then delete s from Closed
9 …
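A runnable Python sketch of this procedure is given below. Since the listing above is truncated, the predecessor-propagation loop is our reconstruction based on the standard LSS-LRTA* learning step; all names and the data layout are illustrative assumptions, not the paper's code.

```python
import heapq

def modified_dijkstra(open_states, closed, h, h0, cost, preds, updated):
    """Propagate h-values from Open back into Closed (cf. Algorithm 6).

    h is mutated in place; updated[s] is set to True whenever a state's
    new h-value exceeds its initial value h0[s] (the depression mark).
    """
    closed = set(closed)
    for s in closed:
        h[s] = float("inf")          # forget the old h-values of expanded states
    heap = [(h[s], s) for s in open_states]
    heapq.heapify(heap)
    while closed and heap:
        hs, s = heapq.heappop(heap)
        if hs > h[s]:
            continue                 # stale heap entry
        if hs > h0[s]:
            updated[s] = True        # h grew: s is flagged as in a depression
        closed.discard(s)
        for p in preds(s):           # reconstruction of the truncated lines
            if p in closed and h[p] > cost(p, s) + h[s]:
                h[p] = cost(p, s) + h[s]
                heapq.heappush(heap, (h[p], p))
    return h, updated

# Toy chain a - b - c with unit costs; c is on Open, a and b were expanded.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
h = {"a": 0, "b": 0, "c": 1}
updated = {}
modified_dijkstra(["c"], ["a", "b"], h, {"a": 0, "b": 0, "c": 1},
                  lambda p, s: 1, graph.__getitem__, updated)
```

In the toy example, the expanded states a and b receive higher h-values propagated from the frontier state c and are both flagged as updated, which is exactly the marking that Algorithm 7 later avoids.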

Algorithm 7: Selection of the next state used by aLSS-LRTA* and aRTAA*
1 function Extract-Best-State()
2 if Open contains an s such that s.updated = false then
3   s ← argmin_{s' ∈ Open, s'.updated = false} g(s') + h(s')
4 else
5   s ← argmin_{s' ∈ Open} g(s') + h(s')
6 return s
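Algorithm 7's selection rule can be rendered in a few lines of Python. This is an illustrative sketch assuming the Open list is a plain list of records with g, h, and the updated flag; the field and function names are hypothetical.

```python
def extract_best_state(open_list):
    """Prefer the best unmarked frontier state; fall back to the best overall.

    open_list: list of dicts with keys "g", "h", and "updated" (the
    depression mark set during the update phase).
    """
    unmarked = [s for s in open_list if not s["updated"]]
    candidates = unmarked if unmarked else open_list
    return min(candidates, key=lambda s: s["g"] + s["h"])

frontier = [
    {"id": "a", "g": 2, "h": 3, "updated": True},   # marked: inside a depression
    {"id": "b", "g": 4, "h": 4, "updated": False},  # unmarked, but larger f-value
]
```

On this frontier the unmarked state b is chosen despite its larger f-value; when every frontier state is marked, the rule degrades gracefully to the plain f-value selection used by LSS-LRTA* and RTAA*.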

Figure 3: First 4 iterations of LSS-LRTA* (left) and aLSS-LRTA* (right) with lookahead equal to 2 in a 4-connected grid world with unitary action costs, where the initial state is D2 and the goal is D4. Numbers in cell corners denote the g-value (upper left), f-value (upper right), h-value (lower left), and the new h-value of an expanded cell after an update (lower right). Only cells that have been in a closed list show four numbers. Cells generated but not expanded by A* (i.e., in Open) show three numbers, since their h-values have not been updated. Triangles denote states with the updated flag set to true after the search episode. The heuristic used is the Manhattan distance. We assume ties are broken by choosing first the right, then the bottom, then the left, and then the top adjacent cell. The position of the agent is given by the dot. A grid cell is shaded (gray) if it is a blocked cell that the agent has not sensed yet. A grid cell is black if it is a blocked cell that the agent has already sensed. The best state chosen to move the agent to after lookahead search is indicated by an arrow.

Figure 5: Upper row: the four maze maps used to test our approach, each of 512×512 cells. Lower row: 4 of the 12 game maps used. The first two come from Dragon Age: Origins; the remaining two are from StarCraft.

Figure 6: Plots showing the average solution cost found by the LSS-LRTA* variants versus average planning time per episode, measured in milliseconds. (a) shows stats on the game-map benchmarks, and (b) on the maze benchmarks. Costs are shown on a log-scale.

Figure 7: Plots showing the average solution cost found by the RTAA* variants versus average planning time per episode. (a) shows stats on the game-map benchmarks, and (b) on the maze benchmarks. Costs are shown on a log-scale.

Figure 8: Plots showing the average solution cost found by daRTAA* and daLSS-LRTA* versus average planning time per episode. (a) shows stats on the game-map benchmarks, and (b) on the maze benchmarks. Costs are shown on a log-scale.

Figure 9: Cost improvement factor of daLSS-LRTA* over LSS-LRTA*, in game maps (left) and maze benchmarks (right). An improvement factor equal to n indicates that the solution found by our algorithm is n times cheaper than the one found by the original algorithm.

Figure 10: Cost improvement factor of daRTAA* over RTAA*, in game maps (left) and maze benchmarks (right). An improvement factor equal to n indicates that the solution found by our algorithm is n times cheaper than the one found by the original algorithm.

Figure 12: A situation in which the relative performance between LSS-LRTA* and aLSS-LRTA* changes depending on the value of n. S is the start state, and G is the goal. Ties are broken in favor of upper cells.

1 procedure ModifiedDijkstra()
2 for each state s in Closed do h(s) ← ∞
3 while Closed ≠ ∅ do
4   Extract an s with minimum h-value from Open
5   if s ∈ Closed then delete s from Closed
6 …

Table 1: Average results for the 12 game maps. For lookahead value k, we report the solution cost per test case (Avg. Cost) and four measures of efficiency: the runtime per planning episode (Time/ep) in milliseconds, the number of cell expansions per planning episode (Exp/ep), the number of heap percolations per planning episode (Per/ep), and the runtime per test case (Time) in milliseconds. All results were obtained using a Linux machine with an Intel Xeon CPU running at 2 GHz and 12 GB RAM.

Table 2: Procedures used for the update phase and for the selection of the next state for each of the algorithms discussed in the paper. Worst-case time complexity for each procedure is included, assuming the Open list is implemented as a binary heap. M corresponds to |Open| + |Closed|, N is equal to |Closed|, and L is |Open|.

Table 3: Average results of RTAA* variants over mazes. For a given lookahead parameter value, we report the average solution cost (Avg. Cost), average number of planning episodes (# Planning Episodes), total runtime (Total Time), average runtime per search episode (Time per Episode), average number of expansions per episode (Exp. per ep.), and average number of percolations per planning episode (Perc. per ep.). All times are reported in milliseconds.

Table 4: Average results of RTAA* variants over game maps. For a given lookahead parameter value, we report the average solution cost (Avg. Cost), average number of planning episodes (# Planning Episodes), total runtime (Total Time), average runtime per search episode (Time per Episode), average number of expansions per episode (Exp. per ep.), and average number of percolations per planning episode (Perc. per ep.). All times are reported in milliseconds.

Table 5: Average results of LSS-LRTA* variants over mazes. For a given lookahead parameter value, we report the average solution cost (Avg. Cost), average number of planning episodes (# Planning Episodes), total runtime (Total Time), average runtime per search episode (Time per Episode), average number of expansions per episode (Exp. per ep.), and average number of percolations per planning episode (Perc. per ep.). All times are reported in milliseconds. Results obtained over a Linux PC with a Pentium QuadCore 2.33 GHz CPU and 8 GB RAM.

Table 6: Average results of LSS-LRTA* variants over game maps. For a given lookahead parameter value, we report the average solution cost (Avg. Cost), average number of planning episodes (# Planning Episodes), total runtime (Total Time), average runtime per search episode (Time per Episode), average number of expansions per episode (Exp. per ep.), and average number of percolations per planning episode (Perc. per ep.). All times are reported in milliseconds. Results obtained over a Linux PC with a Pentium QuadCore 2.33 GHz CPU and 8 GB RAM.

Appendix B. Additional Proofs for Theorems

B.1 Proof of Theorem 3
We establish that, for any pair of neighbor states $s$ and $s'$, $h_{n+1}(s) \leq c_{n+1}(s, s') + h_{n+1}(s')$. We divide the rest of the argument into three cases.
Case 1. Both $s$ and $s'$ are in Closed. Then, by Proposition 1,
$h_{n+1}(s') = k_n(s', s'') + h_n(s'')$, (14)
for some $s'' \in$ Open. On the other hand, again by Proposition 1, $h_{n+1}(s) = \min_{s_b \in Open} k_n(s, s_b) + h_n(s_b)$, and thus
$h_{n+1}(s) \leq k_n(s, s'') + h_n(s'')$, (15)
since $s''$ is an element of Open. However, because $k_n(s, s'')$ is the cost of the shortest path between $s$ and $s''$, we know that
$k_n(s, s'') \leq c_n(s, s') + k_n(s', s'')$. (16)
Adding up (15) and (16), we obtain
$h_{n+1}(s) \leq c_n(s, s') + k_n(s', s'') + h_n(s'')$. (17)
Using Equation 14, we substitute $k_n(s', s'') + h_n(s'')$ in Inequality 17, obtaining:
$h_{n+1}(s) \leq c_n(s, s') + h_{n+1}(s')$. (18)
Finally, since the cost function can only increase, we have that $c_n(s, s') \leq c_{n+1}(s, s')$, and hence:
$h_{n+1}(s) \leq c_{n+1}(s, s') + h_{n+1}(s')$, (19)
which finishes the proof for this case.
Case 2. One state among $s$ and $s'$ is in Closed, and the other state is not in Closed. Without loss of generality, assume $s \in$ Closed. Since $s'$ is not in Closed, it must be in Open, because $s$ was expanded by A* and $s'$ is a neighbor of $s$. By Proposition 1 we know that $h_{n+1}(s) = \min_{s_b \in Open} k_n(s, s_b) + h_n(s_b)$, but since $s'$ is a particular state in Open, we have:
$h_{n+1}(s) \leq c_n(s, s') + h_n(s')$.
Since $s'$ was not expanded, $h_{n+1}(s') = h_n(s')$, and since $c_n \leq c_{n+1}$, we obtain:
$h_{n+1}(s) \leq c_{n+1}(s, s') + h_{n+1}(s')$,
which concludes the proof for this case.
Case 3. Both $s$ and $s'$ are not in Closed. Since $h_n$ is consistent:
$h_n(s) \leq c_n(s, s') + h_n(s')$. (21)
Now we use that the $h$-values of $s$ and $s'$ are not updated ($h_n(s) = h_{n+1}(s)$ and $h_n(s') = h_{n+1}(s')$), and the fact that the cost function increases, to write:
$h_{n+1}(s) \leq c_{n+1}(s, s') + h_{n+1}(s')$,
which finishes the proof for this case. In all three cases we proved the desired inequality, and therefore we conclude that the heuristic $h_{n+1}$ is consistent with respect to cost function $c_{n+1}$.