This archive contains detailed empirical data for four experiments,
each within a separate directory.

Contents
========

main
----
This directory contains the data for the main experiment described in
the article. There is one subdirectory for each domain, which in turn
contains a subdirectory for each planner or planner configuration
considered for this domain. The planner subdirectories contain one
file for each task that is solved within the timeout, with information
about run-time and plan length, as well as the generated plan.
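
As an illustration, run-times could be collected from this layout with
a small Python sketch like the following. The function names and the
per-task file format (assumed here to consist of "key: value" lines
with a "run-time" key) are illustrative assumptions and should be
checked against the actual files.

    import os

    def read_runtime(path):
        # parse the assumed "run-time: <seconds>" line from a task file
        with open(path) as f:
            for line in f:
                if line.lower().startswith("run-time"):
                    return float(line.split(":", 1)[1].split()[0])
        return None

    def collect_runtimes(root="main"):
        # maps (domain, planner, task) -> run-time for all solved tasks
        runtimes = {}
        for domain in sorted(os.listdir(root)):
            for planner in sorted(os.listdir(os.path.join(root, domain))):
                planner_dir = os.path.join(root, domain, planner)
                for task in sorted(os.listdir(planner_dir)):
                    runtime = read_runtime(os.path.join(planner_dir, task))
                    if runtime is not None:
                        runtimes[(domain, planner, task)] = runtime
        return runtimes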

deferred-evaluation
-------------------
This directory contains data for an experiment to evaluate the
usefulness of deferred heuristic evaluation. The data is structured in
the same way as for the main experiment, except that we consider
search time instead of total solution time, as we want to evaluate the
impact of deferred heuristic evaluation on search performance.

For each domain, two planner configurations are considered: One is the
best configuration of Fast Downward for this domain according to the
main experiment [1,2]. The other configuration is identical to the
first, except that deferred heuristic evaluation is replaced with a
standard node expansion scheme, where the heuristic values of all
successors are computed upon expansion.
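
As an illustration, the following greedy best-first search sketch in
Python (a simplification, not Fast Downward's actual implementation)
shows the difference between the two schemes: with deferred
evaluation, successors inherit the heuristic value of their parent and
are only evaluated themselves when they are expanded.

    import heapq
    import itertools

    def gbfs(initial, is_goal, successors, h, deferred=False):
        # counter breaks ties so that states never need to be compared
        counter = itertools.count()
        open_list = [(h(initial), next(counter), initial)]
        closed = set()
        while open_list:
            _, _, state = heapq.heappop(open_list)
            if state in closed:
                continue
            closed.add(state)
            if is_goal(state):
                return state
            # standard scheme: h is computed for every generated successor;
            # deferred scheme: successors inherit the parent's h value and
            # are only evaluated themselves if and when they are expanded
            h_parent = h(state) if deferred else None
            for succ in successors(state):
                priority = h_parent if deferred else h(succ)
                heapq.heappush(open_list, (priority, next(counter), succ))
        return None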

Because this experiment is not discussed in detail in the article, the
domain directories contain summaries of the speedups obtained through
the use of deferred heuristic evaluation in that domain, and the
experiment directory contains an overall summary of the results.

successor-generators
--------------------
This directory contains data for an experiment to evaluate the
usefulness of successor generators. The data is structured in the same
way as for the main experiment, except that we consider search time
instead of total solution time, as we want to evaluate the impact of
successor generators on search performance.

For each domain, two planner configurations are considered: One is the
best configuration of Fast Downward for this domain according to the
main experiment [1]. The other configuration is identical to the
first, except that successor generators are replaced with a simple
algorithm that tests each operator for applicability individually.
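
As an illustration, the following Python sketch contrasts that simple
algorithm with a toy stand-in for a successor generator which indexes
operators by a single precondition fact. The actual successor
generators are more elaborate; the sketch only conveys the basic
contrast with testing every operator individually.

    from dataclasses import dataclass

    @dataclass
    class Operator:
        name: str
        preconditions: dict  # variable -> required value

    def applicable_naive(state, operators):
        # test every operator individually against the state (var -> value)
        return [op for op in operators
                if all(state.get(var) == val
                       for var, val in op.preconditions.items())]

    class ToySuccessorGenerator:
        # index each operator under one of its precondition facts, so that
        # only operators whose indexed fact holds in the state are tested
        def __init__(self, operators):
            self.by_fact = {}
            self.unconditional = []
            for op in operators:
                if op.preconditions:
                    fact = next(iter(op.preconditions.items()))
                    self.by_fact.setdefault(fact, []).append(op)
                else:
                    self.unconditional.append(op)

        def applicable(self, state):
            candidates = list(self.unconditional)
            for fact in state.items():
                candidates.extend(self.by_fact.get(fact, []))
            return applicable_naive(state, candidates)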

Because this experiment is not discussed in detail in the article, the
domain directories contain summaries of the speedups obtained through
the use of successor generators in that domain, and the experiment
directory contains an overall summary of the results.

iterative-broadening-linearization
----------------------------------
This directory contains data for an experiment to evaluate the
usefulness of the linearization technique of focused
iterative-broadening search discussed in the article. The data is
structured in the same way as for the main experiment, except that we
consider search time instead of total solution time, as we want to
evaluate the impact of linearization within focused
iterative-broadening search on search performance.

In this experiment, we only consider the focused iterative-broadening
algorithm. Instead of varying the algorithm, we vary the input tasks:
In the directories "normal-goal", we consider the regular benchmark
tasks. In the directories "monolithic-goal", we consider equivalent
monolithic tasks, i.e., tasks with a single goal. On monolithic tasks,
focused iterative-broadening search reduces to the reach-one-goal
procedure.

A monolithic task is obtained from the original task in three steps:
a new variable "goal-solved" with domain {false, true} is introduced,
which is initially false; a new operator "solve-goal" is introduced,
which has the original goal of the planning task as its precondition
and sets "goal-solved" to true; and the original goal is replaced by
the single condition that "goal-solved" be true.

Because this experiment is not discussed in detail in the article, the
domain directories contain summaries of the speedups obtained through
the use of normal (instead of monolithic) goals in that domain, and
the experiment directory contains an overall summary of the results.


Notes
=====
In the Assembly domain, some of the IPC1 tasks (#7, #12, #13, #14,
#19, #27) have bugs: Undefined objects are used in the initial state.
For these experiments, the affected tasks were repaired by removing
the offending lines from the initial state.



[1] A configuration is considered better than another in a given
domain if it solves more tasks from that domain within the timeout. If
several configurations solve the same number of tasks, the average
solution time rank is used as a tie-breaking criterion. For
example, a configuration which is second-fastest for 70% of the tasks
it solves and fourth-fastest for 30% of the tasks it solves has an
average rank of 0.7 * 2 + 0.3 * 4 = 2.6, which is better than a
configuration which is third-fastest for all tasks of the domain
(average rank 3).
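
As an illustration, the example computation can be reproduced as
follows (assuming, for concreteness, that the configuration solves
ten tasks):

    def average_rank(ranks):
        # per-task ranks of one configuration among all configurations
        return sum(ranks) / len(ranks)

    # second-fastest on 7 of 10 solved tasks, fourth-fastest on the other 3
    assert average_rank([2] * 7 + [4] * 3) == 2.6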


[2] If focused iterative-broadening search is the best configuration,
the second-best configuration is considered instead, because deferred
heuristic evaluation is only relevant for the best-first search
configurations.
