Temporal Decision Trees: Model-based Diagnosis of Dynamic Systems On-Board

The automatic generation of decision trees based on off-line reasoning on models of a domain is a reasonable compromise between the advantages of using a model-based approach in technical domains and the constraints imposed by embedded applications. In this paper we extend the approach to deal with temporal information. We introduce a notion of temporal decision tree, which is designed to make use of relevant information as it is acquired over time, and we present an algorithm for compiling such trees from a model-based reasoning system.


Introduction
The embedding of software components inside physical systems has become widespread in recent decades, due to the convenience of including electronic control in the systems themselves. This phenomenon occurs in several industrial sectors, ranging from large-scale products such as cars to much more expensive systems like aircraft and spacecraft.
The case of automotive systems is paradigmatic. In fact, the number and complexity of vehicle subsystems managed by software control has increased significantly since the mid-1980s and will increase further in the next decades (see Foresight-Vehicle, 2002), due to the possibility of introducing, at costs that are acceptable for such large-scale products, more flexibility into the systems, e.g. for increased performance and safety, and reduced emissions. Systems such as fuel injection control, ABS (to prevent blockage of the wheels while braking), ASR (to avoid slipping wheels), and ESP (controlling the stability of the vehicle) would not be possible at feasible costs without electronic control.
The software modules are usually installed on dedicated Electronic Control Units (ECUs) and they play a very important role since they have complete control of a subsystem: human "control" becomes simply an input to the control system, together with inputs from appropriate sensors. For example, the position of the accelerator pedal is an input to the ECU which controls fuel delivery to the injectors.
A serious problem with these systems is that the software must behave properly even in the presence of faults, and must guarantee high levels of availability and safety for the controlled system and for the vehicle. The controlled systems, in fact, are in many cases safety critical: the braking system is an obvious example. This means that monitoring the system's behaviour, detecting and isolating failures, and performing the appropriate recovery actions are critical tasks that must be performed by the control software. If any problem is detected or suspected, the software must react, modifying the way the system is controlled, with the primary goal of guaranteeing safety and availability. According to recent estimates, about 75% of the ECU software deals with detecting problems and performing recovery actions, that is, with the tasks of diagnosis and repair (see again Foresight-Vehicle, 2002).
Thus the design of the diagnostic software is a very critical and time-consuming activity, which is currently performed manually by expert engineers who use their knowledge to perform the "Failure Mode and Effect Analysis" (FMEA)1 and define diagnostic and recovery strategies.
The problem is complex and critical per se, but it is made even more difficult by a number of other issues and constraints that have to be taken into account: • The resources that are available on-board must be limited, in terms of memory and computing power, to keep costs low. This has to be combined with the fact that near real-time performance is needed, especially in situations that may be safety critical. For example, in direct injection fuel delivery systems, where fuel is maintained at a very high pressure (more than 1000 bar), there are cases where the system must react to problems within one rotation of the engine (e.g. 15 milliseconds at 4000 rpm) to prevent serious damage to the engine and danger to passengers. In fact, a fuel leakage can be very dangerous if it comes from a high pressure line. In this case it is important to distinguish whether a loss of pressure is due to such a leak, in order to activate some emergency action (for example, stopping the engine), or to some other failure which can simply be signalled to the user.
• In order to keep costs acceptable for a large scale product, the set of sensors available on board is usually limited to those necessary for controlling the systems under their correct behaviour; thus, it is not always easy to figure out the impact that faults may have on the quantities monitored by the sensors, whose physical, logical and temporal relation to faults may be not straightforward.
• The devices to be diagnosed are complex from the behavioural point of view: they have a dynamic and time-varying behaviour; they are embedded in complex systems and they interact with other subsystems; in some cases the control system automatically compensates deviations from the nominal behaviour.
These aspects make the design of software modules for control and diagnosis very challenging but also very expensive and time consuming. There is then a significant need for improving this activity, making it more reliable, complete and efficient through the use of automated systems to support and complement the experience of engineers, in order to meet the growing standards which are required for monitoring, diagnosis and repair strategies.
Model-based reasoning (MBR) proved to be an interesting opportunity for automotive applications, and indeed some applications to real systems were experimented with in the 90s (e.g., Cascio & Sanseverino, 1997; Mosterman, Biswas, & Manders, 1998; Sachenbacher, Malik, & Struss, 1998; Sachenbacher, Struss, & Weber, 2000). The types of models adopted in MBR are conceptually not too far from those adopted by engineers. In particular, the component-oriented approach typical of MBR fits quite well with the problem of dealing with several variants of systems, assembled starting from the same set of basic components. For a more thorough discussion of the advantages of the MBR approach, see Console and Dressler (1999).

1. The result of FMEA is a table which lists, for each possible fault of each component of a system, the effect of the fault on the component and on the system as a whole, and the possible strategy to detect the fault. This table is compiled manually by engineers, based on experience and on a blueprint of the system.
Most of the applications developed so far, however, concentrated on off-board diagnosis, that is, diagnosis in the workshop, rather than on-board diagnosis. The case of on-board systems seems to be more complicated since, due to the restrictions on the hardware that can be put on-board, it is still questionable whether diagnostic systems can be designed to reason on first-principle models on-board. For this reason other approaches have been developed in order to exploit the advantages of MBR also in on-board applications. In particular, a compilation-based scheme for the design of on-board diagnostic systems for vehicles was experimented with in the Vehicle Model Based Diagnosis (VMBD) BRITE-Euram Project (1996-99), and applied to a Common-rail fuel injection system (Cascio, Console, Guagliumi, Osella, Sottano, & Theseider, 1999). In this approach a model-based diagnostic system is used to generate a compact on-board diagnostic system in the form of a decision tree. Similarly, automated FMEA reports generated by model-based reasoning in the Autosteve system can be used to generate diagnostic decision trees (Price, 1999). A similar idea was proposed by Darwiche (1999), where diagnostic rules are generated from a model in order to meet resource constraints.
These approaches have interesting advantages. On the one hand, they share most of the benefits of model-based systems, such as relying on a comprehensive representation of the system behaviour and a well defined characterization of diagnosis. On the other hand, decision trees and other compact representations make sense for representing onboard diagnostic strategies, being efficient in space and time. Furthermore, algorithms for synthesizing decision trees from examples are well established in the machine learning community. In this specific case the examples are the solutions (diagnoses and recovery actions) generated by a model-based system. However, the basic notion of decision tree and the approaches for learning such trees from examples have a major limitation for our kind of applications: they do not cope properly with the temporal behaviour of the systems to be diagnosed, and, in particular, with the fact that incremental discrimination of possible faults, leading to a final decision on an action to be taken on-board, should be based on observations acquired across time, thus taking into account temporal patterns.
For such a reason, in the work described in this paper we introduce a new notion of decision tree, the temporal decision tree, which takes into account the temporal dimension, and we introduce an algorithm for synthesizing temporal decision trees.
Temporal decision trees extend traditional decision trees in that nodes have a temporal label which specifies when a condition should be checked in order to select one of the branches or to make a decision. As we shall see, this allows taking into account that in some cases the order of, and the delay between, observable measures influence the decision to be made, and thus provides important power to improve the decision process. Waiting, however, is not always possible, and thus the generation of the trees includes a notion of deadline for each possible decision. The temporal decision process thus supports selecting the best decision, exploiting observations and their temporal locations (patterns), while taking into account that in some cases a decision has to be taken at some point anyway, to prevent serious problems.
The rest of the paper is organized as follows. In section 2 we summarize some basic ideas about model-based diagnosis (MBD), the use of decision trees in conjunction with it, and the temporal dimension in MBD and in decision trees. In section 3 we provide basic formal definitions about decision trees, which form the basis for their extension to temporal decision trees in section 4. We then describe (section 5) the problem of synthesizing temporal decision trees and our solution (section 6).

Model-based Diagnosis and Decision Trees
In this section we briefly recall the basic notions of model-based diagnosis and we discuss how decision trees can be used for diagnostic purposes, focusing on how they have been used in VMBD in conjunction with the model-based approach (Cascio et al., 1999).

The Atemporal Case
First of all let us sketch the atemporal case and the "traditional" use of diagnostic decision trees.

Atemporal model-based diagnosis
The starting point for model-based diagnosis is a model of the structure and behaviour of the device to be diagnosed. More specifically, we assume a component-centered approach in which: • A model is provided for each component type; a component is characterized by:
-A set of variables (with a distinguished set of interface variables);
-A set of modes, including an ok mode (correct behaviour) and possibly a set of fault modes;
-A set of relations involving component variables and modes, describing the behaviour of the component in each mode. These relations may model the correct behaviour of the device and, in some cases, the behaviour in presence of faults (faulty behaviour).
• A model for the device is given as a list of the component instances and of their connections (connections between interface variables).
In the Artificial Intelligence approach, models are usually qualitative, that is the domain of each variable is a finite set of values. Such an abstraction has proven to be useful for diagnostic purposes. The model can be used for simulating the behaviour of a system and then for computing diagnoses. In fact, given a set of observations about the system behaviour, diagnoses can be determined after comparing the behaviour predicted by the model (in normal conditions or in the presence of single or multiple faults) and the observed behaviour.
In order for the model to be useful for on-board diagnosis, for each fault mode F (or combination of fault modes) the model must include the recovery action the control software should perform in case F occurs. In general these actions have a cost, mainly related to the resulting reduction of functionality of the system. Moreover, two actions a1 and a2 can be related in the following sense: • a1, the recovery action associated with fault F1, carries out operations that include those performed by a2, the recovery action associated with F2; • a1 can be used as a recovery action also when F2 occurs; it may however carry out unneeded operations, thus reducing the system functionality more than strictly necessary.
• Therefore, in case we cannot discriminate between F1 and F2, applying a1 is a rational choice.
In section 4.3 we will present a model for actions which formalizes this relation. Thus the main goal of the on-board diagnostic procedure is to decide the best action to be performed, given the observed malfunction. This type of procedure can be efficiently represented using decision trees.

Decision trees
Decision trees can be used to implement classification problem solving, and thus some form of diagnostic procedure. Each node of the tree corresponds to an observable. In on-board diagnosis, observables correspond either to direct sensor readings, or to the results of computations carried out by the ECUs on the available measurements. In the following the word "sensor" will denote both types of observables; it is worth pointing out that the latter may require some time to be performed. In this paper we mainly assume that a sensor reading takes no time; however, the approach we propose also deals with the case in which a sensor reading is time consuming, as pointed out in section 5.1. A node can have as many descendants as the number of qualitative values associated with the sensor. The leaves of the tree correspond to actions that can be performed on board. Thus, given the available sensor readings, the tree can easily be used to decide the recovery action to be performed.
Such decision trees can be generated automatically from a set of examples or cases. By example here we mean a possible assignment of values to observables, the corresponding diagnosis (or possible alternative diagnoses), and a selected recovery action which is appropriate for such a set of suspect diagnoses. This set can be produced using a model-based diagnostic system which, given a set of observables, can compute the diagnoses and recovery actions.
In the atemporal case, with finite qualitative domains, the number of possible combinations of observations is finite, and usually small; therefore considering all cases exhaustively (and not just a sample) is feasible, and there are two equivalent ways of building such an exhaustive set of cases:
1. Simulation approach: for each fault F, we run the model-based system to predict the observations corresponding to F.
2. Diagnostic approach: we run a diagnosis engine on all relevant combinations of observations, to compute the candidate diagnoses for each one of these cases.
In either case, the resulting decision tree contains the same information as the set of cases; if, once sensors are placed in the system, observations have no further cost, the decision tree is just a way to save space with respect to a table, and speed up lookup of information.
In this way the advantages of the model-based approach and of the use of compact decision trees can be combined: the model-based engine produces diagnoses based on reusable component models and can be used as the diagnoser off-board; compact decision trees, synthesized from cases classified by the model-based engine, can be installed on-board.

Towards Temporal Decision Trees
In this section we briefly recall some basic notions on temporal model-based diagnosis (see Brusoni, Console, Terenziani, & Theseider Dupré, 1998 for a general discussion on temporal diagnosis), and we informally introduce temporal decision trees.

Temporal MBD
The basic definition of MBD is conceptually similar to the atemporal case. Let us consider the main differences.
As regards the model of each component type we consider a further type of variable: state variables used to model the dynamic behaviour of the component. The set of relations describing the behavior of the component (for each mode) is augmented with temporal information (constraints); we do not make specific assumptions on the model of time, even though, as we shall see in the following, this has an impact on the cases which can be considered for the tree generation. As an example, these constraints may specify a delay between an input and an output or in the change of state of the component.
As regards recovery actions, a deadline for performing the action must be specified; this represents the maximum time that can elapse between fault detection and the recovery action; this is the amount of time available for the control software to perform discrimination. This piece of information is specific to each component instance, rather than component type, because the action and the deadline are related to the potential unacceptable effects that a fault could have on the overall system; the same fault of the same component type could be very dangerous for one instance and tolerable for another.
Diagnosis is started when observations indicate that the system is not behaving correctly. Observations correspond to (possibly qualitative) values of variables across time. In general, in the temporal case a diagnosis is an assignment of a mode of behaviour to component instances across time such that the observed behaviour is explained by the assignment given the model. For details on different ways of defining explanation in this case see Brusoni et al. (1998). For the purposes of this paper we are only interested in the fact that, given a set of observables, a diagnosis (or a set of candidate diagnoses if no complete discrimination is possible) can be computed and a recovery action is determined.
This means that the starting point of our approach is a table containing the results of running the model-based diagnostic system on a set of cases, (almost) independently of the model-based diagnostic system used for generating the table.
We already mentioned that in the static case, with finite qualitative domains, an exhaustive set of cases can be considered. In the temporal case, if the model of time is purely qualitative, a table with temporal information cannot be built by prediction, while it can be built by running the diagnosis engine on a set of cases with quantitative information: diagnoses which make qualitative predictions that are inconsistent with the quantitative information can be ruled out. Of course, this cannot in general be done exhaustively, even if observations are assumed to be acquired at discrete times; when the set of cases is not exhaustive, decision tree generation is actually learning from examples.
Thus a simulation approach can only be used in case the temporal constraints in the model are precise enough to predict at least partial information on the temporal location of the observations, e.g., in case the model includes quantitative temporal constraints.
The diagnostic approach can be used also in case of weaker (qualitative) temporal constraints in the model.
As regards the observations, we consider the general case where a set of snapshots is available; each snapshot is labelled with the time of observation and reports the value of the sensors (observables) at that time. This makes the approach suitable for different notions of time in the underlying model and in the observations (see again the discussion in Brusoni et al., 1998).

Example 1 The starting point for generating a temporal decision tree is a table like the one in Figure 1. Each row of the table corresponds to a situation (case or "example" in the terminology of machine learning) and it reports:
• For each sensor si, the values that have been observed at each snapshot (in the example there are 8 snapshots, labelled 0 to 7); n, l, h, v and z correspond to the qualitative values of the sensor measurements: n for normal, l for low, h for high, v for very low and z for zero.
• The recovery action Act to be performed in that situation.
• The deadline Dl for performing such an action.
A table like the one in the above example represents a set of situations that may be encountered in case of faults and, as noted above, it can be generated using either a diagnostic or a simulation approach. In the next section we shall introduce the notion of temporal decision trees and show how pieces of information about sensor histories, like those in the table above, can be exploited in the generation of such trees.
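To make the structure of such a table concrete, the following Python sketch encodes each row as a fault situation with per-snapshot sensor histories, a recovery action and a deadline. The names and the two sample rows are purely illustrative (loosely modelled on the sit3/sit4 discussion later in the paper), not the actual data of Figure 1:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Situation:
    """One row of the case table: sensor histories, recovery action, deadline."""
    readings: Dict[str, List[str]]  # sensor name -> qualitative value per snapshot
    action: str                     # recovery action Act
    deadline: int                   # deadline Dl (snapshots after fault detection)

    def value_at(self, sensor: str, t: int) -> str:
        """Qualitative value of `sensor` at snapshot t (t = 0 is fault detection)."""
        return self.readings[sensor][t]

# Two illustrative rows: s2 is low at detection and becomes very low
# after 4 snapshots in one situation and after 6 in the other.
sit3 = Situation({"s2": ["l", "l", "l", "l", "v", "v", "v", "v"]}, "act_sit3", 6)
sit4 = Situation({"s2": ["l", "l", "l", "l", "l", "l", "v", "v"]}, "act_sit4", 7)
```

A tree-generation procedure only needs to query such rows for the value of a sensor at a given snapshot, as `value_at` does.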

Introduction to temporal decision trees
Traditional decision trees do not include a notion of time, i.e., the fact that data may be observable at different times or that different faults may be distinguished only by the temporal patterns of data. Thus, neglecting the notion of time may lead to limitations in the decision process.
For such a reason in this work we introduce a notion of temporal decision tree. Let us analyse the intuition behind temporal decision trees and the decision process they support. Formal definitions will be provided later in the paper.
Let us consider, for example, the fault situations sit3 and sit4 in Figure 1, and let us assume, for the sake of simplicity, that the only available sensor is s2. The two fault situations have to be distinguished in the control software because they require different recovery actions. Both of them can be detected by the fact that s2 shows a low value. Moreover, in both situations, after a while s2 starts showing a very low value.
The only way to discriminate these two situations is to make use of temporal information, that is, to exploit the fact that in sit3 value v shows up 4 time units after fault detection, while in sit4 the same value shows up after 6 time units.
In order to take this into account in a decision tree, we have to include time in the tree. In both examples, the best decision procedure is to wait after observing that s2 = l (that is, after detecting that a fault has occurred). After 4 time units we can make a decision, depending on whether s2 = v or not. This corresponds to the procedure described by the tree in Figure 2. Obviously, waiting is not always the solution, nor is it always possible. In many cases, in fact, safety or other constraints may impose deadlines for performing recovery actions. This has to be reflected in the generation of the decision procedure. Suppose, in the example above, that the deadline for sit3 was 3 rather than 6: in this case the two situations would have been indistinguishable, because it would have been infeasible to wait 4 time units.
Thus, an essential idea for generating small decision trees in the temporal case is to take advantage of the fact that in some cases there is nothing better than waiting2 in order to get a good discrimination, provided that safety and integrity of the physical system are taken into account, and that deadlines for recovery actions are always met.
More generally, one should exploit all the information about temporal patterns of observables and about deadlines, like the information in Figure 1, to produce an optimal diagnostic procedure.
Another idea we use in our approach is the integration of incremental discrimination, which is the basis for the generation and traversal of a decision tree, with the incremental acquisition of information across time.
In the atemporal case, the decision tree should be generated so as to guide the incremental acquisition of information: different subtrees of a node are relative to different sets of faults and therefore may involve different measurements. In a subtree T we perform the measurements that are useful for discriminating the faults compatible with the measurements that made us reach T, starting from the root. For off-board diagnosis, this allows reducing the average number of measurements needed to get to a decision (i.e. the average depth of the tree), which is useful because measurements have a cost, e.g. the time for an operator to take them from the system. For on-board diagnosis, even in case all measurements are simply sensor readings, which have no cost once the sensor has been made part of the system, we are interested in generating small decision trees to save space.
In the temporal case there is a further issue: the incremental acquisition of information is naturally constrained by the flow of time. If we do not want to store sensor values across time (which seems a natural choice, since we have memory constraints), information must be acquired when it is available, and it is not possible to read sensors once the choice of waiting has been made. This issue will be taken into account in the generation of temporal decision trees.

Basic Notions on Decision Trees
Before moving to a formal definition of temporal decision trees, in this section we briefly recall some definitions and algorithms for the atemporal case. In particular, we recall the standard ID3 algorithm (Quinlan, 1986), which will be the basis for our algorithm for the temporal case. The definitions in this section are the standard ones (see any textbook on Artificial Intelligence, e.g., Russell & Norvig, 1995).

Decision Trees
We adopt the following formal definition of decision tree, which will be extended in section 4.1 to temporal decision trees.
2. A different approach would be that of weighing the amount of elapsed time against the possibility of better discriminating faults; such an approach is something we are considering in future work on temporal decision trees, as outlined in section 7.

Definition 1 Let us consider a decision process P where A is the set of available decisions, O is the set of tests that can be performed on the external environment, and out(oi) is the set of possible outcomes of test oi ∈ O.
A decision tree for P is a labelled tree structure T = ⟨r, N, E, L⟩ where: • ⟨r, N, E⟩ is a tree structure with root r, set of nodes N and set of edges E ⊆ N × N; N is partitioned into a set of internal nodes NI and a set of leaves NL.
• L is a labelling function defined over N ∪ E.
• If n ∈ NI, L(n) ∈ O; in other words each internal node is labelled with the name of a test.
• If (n, c) ∈ E then L((n, c)) ∈ out(L(n)); that is, an edge directed from n to c is labelled with a possible outcome of the test associated with n.
• Moreover, if (n, c1), (n, c2) ∈ E and L((n, c1)) = L((n, c2)) then c1 = c2, and for each v ∈ out(L(n)) there is a c such that (n, c) ∈ E and L((n, c)) = v; that is, n has exactly one outgoing edge for each possible outcome of test L(n).
• If l ∈ NL, L(l) ∈ A; in other words each leaf is labelled with a decision.
When the decision-making agent uses the tree, it starts from the root. Every time it reaches an inner node n, the agent performs test L(n), observes its outcome v and follows the v-labelled edge. When the agent reaches a leaf l, it makes decision a = L(l). Figure 3 shows a generic recursive algorithm that can be used to build a decision tree starting from a set of Examples and a set of Tests.
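The traversal procedure just described can be rendered as a minimal Python sketch; the node representation below is our own illustrative choice, not something prescribed by Definition 1:

```python
from dataclasses import dataclass, field
from typing import Dict, Union

@dataclass
class Leaf:
    decision: str  # L(l), a decision in A

@dataclass
class Inner:
    test: str  # L(n), a test in O
    # one child per possible outcome of the test
    children: Dict[str, Union["Inner", Leaf]] = field(default_factory=dict)

def decide(node: Union[Inner, Leaf], outcomes: Dict[str, str]) -> str:
    """Traverse the tree: at each inner node perform the test and follow
    the edge labelled with its outcome; at a leaf return the decision."""
    while isinstance(node, Inner):
        node = node.children[outcomes[node.test]]
    return node.decision
```

For instance, with a root testing a sensor s1 and a subtree testing s2, `decide` repeatedly looks up the outcome of the current node's test until it reaches a leaf.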

Building Decision Trees
Recursion ends when either the remaining examples need no further discrimination because they all correspond to the same decision, or all available observables have been used and the observed values match cases with different decisions. In the latter case the observables are not enough to get to the proper decision, and an agent actually using the tree should take this into account.
In case no terminating condition holds, the algorithm chooses an observable variable test to become the root label for subtree T. Depending on how ChooseTest is implemented we get different specific algorithms and different decision trees.
A subtree is built for each possible outcome value of test in a recursive call of BuildTree, with sets Tests Update and SubExamples as inputs. Tests Update is obtained by removing test from the set of tests, in order to avoid using it again. SubExamples is the subset of Examples containing only those examples that have value as outcome for test.
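The recursive scheme can be sketched as follows (an illustrative rendering of the generic BuildTree algorithm of Figure 3, not the paper's own code; trees are represented as tagged tuples, and an ambiguous leaf simply carries the set of remaining decisions):

```python
def build_tree(examples, tests, choose_test):
    """examples: list of (outcomes, decision) pairs, where `outcomes`
    maps each test name to its observed value in that example."""
    decisions = {d for _, d in examples}
    if len(decisions) == 1:      # all examples agree: no discrimination needed
        return ("leaf", decisions.pop())
    if not tests:                # observables exhausted: ambiguous leaf
        return ("leaf", decisions)
    test = choose_test(examples, tests)
    remaining = [t for t in tests if t != test]   # Tests_Update: drop `test`
    children = {}
    for value in {o[test] for o, _ in examples}:
        # SubExamples: the examples with `value` as outcome for `test`
        sub = [(o, d) for o, d in examples if o[test] == value]
        children[value] = build_tree(sub, remaining, choose_test)
    return ("node", test, children)
```

Passing different `choose_test` functions yields different specific algorithms, as noted below.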
As mentioned before, there are as many specific algorithms, and, in general, results, as there are implementations of ChooseTest. It is in general desirable to generate a tree with minimum average depth, for two reasons: • Minimizing average depth means minimizing the average number of tests and thus speeding up the decision process.
• In machine learning, a small number of tests also means a higher degree of generalization on the particular examples used in building the tree.

ID3
Unfortunately, finding a decision tree with minimum average depth is an intractable problem; however, there exists a good best-first heuristic for choosing tests so as to produce trees that are "not too deep". This heuristic was proposed in the ID3 algorithm (Quinlan, 1986), and is based on the concept of entropy from information theory. In the following we recall this approach in some detail, also in order to introduce some notation which will be used in the rest of the paper.
Definition 2 Given a (discrete) probability distribution P = {p1, . . . , pn}, its entropy E(P) is:

E(P) = − Σi pi log2 pi    (1)

If the set of available decisions is A = {a1, . . . , an}, the set of examples E can be partitioned into the subsets E|a = {e ∈ E | the decision associated with example e is a}.

Definition 3 For each ai ∈ A, i = 1, . . . , n, we define the probability of ai with respect to E as follows:

P(ai; E) = |E|ai| / |E|

It is worth noting that, if examples are endowed with their a priori probabilities, we can redefine P(ai; E) in order to take them into account. The basic formulation of ID3, however, assumes that all examples are equiprobable and computes the probability distribution from the frequencies of the examples.
The entropy of E is:

E(E) = E({P(a1; E), . . . , P(an; E)})    (2)

If all decisions are equiprobable, we get E(E) = log2 n, which is the maximum degree of disorder for n decisions. If all decisions but one have probability equal to 0, then E(E) = 0: the degree of disorder is minimum. Entropy is used as follows for test selection. A test o with possible outcomes v1, . . . , vk splits the set of examples into the subsets E|o→vj = {e ∈ E | the outcome of o in example e is vj}. In particular, if while building the tree we choose test o, E|o→vj is the subset of examples we use in building the subtree for the child corresponding to vj. The lower the degree of disorder in E|o→vj, the closer we are to a leaf. Therefore, following equation (2), the disorder of each subset can be measured by its entropy E(E|o→vj). Finally, we define the entropy of a test o as the average entropy over its possible outcomes:

Definition 4 The entropy of a test o with respect to a set of examples E is:

E(o; E) = Σj (|E|o→vj| / |E|) E(E|o→vj)
The ID3 algorithm then simply consists of choosing the test with the lowest entropy. Figure 4 shows the implementation of ChooseTest that yields ID3.

Extending Decision Trees
In this section we formally introduce the notion of temporal decision tree, and we show how timing information can be added to the set of examples used in tree building. Moreover, we introduce a model for recovery actions that expresses information needed by the algorithm.

Temporal Decision Trees
In section 2.2.2 we motivated the monotonicity requirement for temporal decision trees: their traversal requires information relative to increasing times only, so that no information has to be stored.
We now discuss how temporal information is actually included in the tree and matched with temporal information in the observations. The tree has to be used when some abnormal value is detected for some sensor (fault detection). We therefore take the time of fault detection as the reference time for the temporal labels of observations in the tree. If we look for example at the data shown in Figure 1, we see that, for every fault situation, there is always at least one sensor whose value at time 0 is different from normal. The reason is that, for each fault situation, we associate the time label 0 with the first snapshot in which some sensor shows a deviation from nominal behaviour.
The following definition provides the extension for the temporal dimension in decision trees.
Definition 5 A temporal decision tree is a decision tree ⟨r, N_I, N_L, E, L⟩ endowed with a time-labelling function T such that: (1) T : N_I → ℝ⁺; we call T(n) a time label; (2) if n ∈ N_I and there exists n′ such that (n, n′) ∈ E (in other words, n′ is a child of n), then T(n′) ≥ T(n).
Since we assume not to store any information, but rather to use information for traversing the tree as dictated by the tree itself, a first branching for discrimination is provided depending on which sensor provided the first abnormal value. We then assume to have different temporal decision trees, one for each sensor that could possibly provide the first abnormal value, or, alternatively, that the root node has no time label and that the edges from the root are labelled not with different values of a single observable, but with the different sensors that could provide the first abnormal observation. Each tree, or subtree in the second alternative, can be generated independently of the others, using only the examples where the sensor providing fault detection is the same. This generation is what will be described in the rest of the paper.
Let us see how a temporal decision tree (or forest, in the case of multiple detecting sensors) can be exploited by an on-board diagnostic agent in order to choose a recovery action. When the first abnormal value is detected, the agent activates a time counter and starts visiting the appropriate tree from the root. When it reaches an inner node n, the agent retrieves both the associated test s = L(n) and the time label t = T(n). Then it waits until the time counter reaches t, performs test s and chooses one of the child nodes depending on the outcome. When the agent reaches a leaf, it performs the corresponding recovery action.
With respect to the atemporal case, the agent has now the option to wait. From the point of view of the agent it may seem pointless to wait when it could look at sensor values, since reading sensor values has no cost. However, from the point of view of the tree things are quite different: we do not want to add a test that makes the tree deeper and at the same time is not necessary.
Condition (2) states that the agent can only move forward in time. This corresponds to the assumption that sensor readings are not stored, discussed in section 2.2.2.
Example 2 Let us consider the diagnostic setting described in Example 1. Figure 5 shows a temporal decision tree for such a setting, that is, one that uses the sensors and recovery actions mentioned in Figure 1. If such a tree is run on the fault situations in the table, a proper recovery action is selected within the deadline.

Adding Timing Information to the Set of Examples
In order to generate temporal decision trees, we need temporal information in the examples. We already introduced informally the notion of a set of examples (or "fault situations") with temporal information when describing the table in Figure 1. The following definition formalizes the same notion.
Definition 6 A temporal example-set (te-set for short) E is a collection of fault situations sit_1, ..., sit_n characterized by a number of sensors sens_1, ..., sens_m and an ascending sequence of time labels t_1 < ... < t_last representing the instants in time for which sensor readings are available. In this context we call observation a pair ⟨sens_i, t_j⟩. A te-set is organized in a table as follows: (1) The table has n rows, one for each fault situation. (2) The table has m × last observation columns containing the outcomes of the different observations for each fault situation. We denote by Val(sit_h, sens_i, t_j) the value measured by sensor sens_i at time t_j in fault situation sit_h.
(3) The table has a distinguished column Act containing the recovery action associated with each fault situation. We denote by Act(sit_h) the recovery action associated with sit_h.
(4) The table has a second distinguished column Dl containing the deadline for each fault situation. We denote by Dl(sit_h) the deadline for sit_h, and we have that Dl(sit_h) ∈ {t_1, ..., t_last} for each h = 1, ..., n. We define the global deadline of a te-set S as Dl(S) = min{Dl(sit) | sit ∈ S}.
We moreover assume that a probability⁴ P(sit; E) is associated with each sit ∈ E, such that Σ_{sit∈E} P(sit; E) = 1. For every E′ ⊆ E and for every sit ∈ E′ we introduce the following notation: P(E′; E) = Σ_{sit∈E′} P(sit; E) and P(sit; E′) = P(sit; E) / P(E′; E).
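A te-set as in Definition 6 can be represented, for instance, by the following container (our own encoding, not from the paper; field and method names are ours):

```python
from dataclasses import dataclass

@dataclass
class TESet:
    """A temporal example-set (Definition 6)."""
    sensors: list   # sens_1 ... sens_m
    times: list     # ascending time labels t_1 < ... < t_last
    val: dict       # (sit, sensor, t) -> qualitative value
    act: dict       # sit -> recovery action
    dl: dict        # sit -> deadline, drawn from `times`
    prob: dict      # sit -> P(sit; E), summing to 1

    def global_deadline(self, sits=None):
        """Dl(S): the minimum of the individual deadlines over S."""
        sits = list(self.act) if sits is None else sits
        return min(self.dl[s] for s in sits)

    def cond_prob(self, sit, subset):
        """P(sit; E') = P(sit; E) / P(E'; E) for sit in a subset E'."""
        return self.prob[sit] / sum(self.prob[s] for s in subset)
```

Note how the global deadline of a subset can only be later than (or equal to) that of the whole te-set, since the minimum is taken over fewer situations.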

A Model for Recovery Actions
The algorithms we shall introduce require a more detailed model of recovery actions. In particular we want to better characterize what happens when it is not possible to uniquely identify the most appropriate recovery action. Moreover, we want to quantify the loss we incur when this happens.
We start with a formal definition: Definition 7 A basic model for recovery actions is a triple ⟨A, ≺, χ⟩ where: (1) A = {a_1, ..., a_K} is a finite set of symbols denoting basic recovery actions.
(2) ≺ ⊆ A × A is a strict partial order relation on A. We say that a_i is weaker than a_j, written a_i ≺ a_j, if a_j produces more recovery effects than a_i, in the sense that a_j could be used in place of a_i (but not vice versa). We therefore assume that there are no drawbacks in actions, that is, any action can be performed at any time with no negative consequences, apart from the cost of the action itself (see below). This is clearly a limitation and something to be tackled in future work (see the discussion in section 7).
(3) χ : A → ℝ⁺ is the cost function, and is such that if a_i ≺ a_j then χ(a_i) < χ(a_j). χ associates a cost with each basic recovery action, expressing possible drawbacks of the action itself. Recovery actions performed on-board usually imply a performance limitation or the abortion of some ongoing activity; costs are meant to estimate the monetary losses or the inconveniences for the users resulting from these. The requirement of monotonicity with respect to ≺ stems from the following consideration: if a_i ≺ a_j and χ(a_i) ≥ χ(a_j), it would never make sense to perform a_i, since a_j could be performed with the same effects at the same (or lower) cost. We moreover assume that costs are independent of the fault situation (a consequence of the no-drawbacks assumption mentioned in the previous point).
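The monotonicity requirement on χ can be checked mechanically. In the sketch below (our own rendering), ≺ is given as a predicate; the individual costs are hypothetical, chosen only to be consistent with the order d ≺ b ≺ a, d ≺ c ≺ a used in the paper's running example:

```python
def chi_is_monotonic(actions, weaker, cost):
    """Check the requirement on chi: a_i ≺ a_j implies chi(a_i) < chi(a_j).

    weaker(x, y) encodes x ≺ y; cost maps each action to chi(action).
    """
    return all(cost[ai] < cost[aj]
               for ai in actions for aj in actions if weaker(ai, aj))
```

A cost assignment that gives a weaker action a higher cost is rejected, since the stronger action would then always be preferable.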
Example 3 Let us consider again the four recovery actions a, b, c, d that appear in the te-set of Figure 1. Figure 6 shows a basic action model for them. The graph expresses the ordering relation ≺, while costs are shown next to action names. We have seen in the previous section that a recovery action is associated with each fault situation; usually this association depends on the fault, but it may also depend on the operating conditions in which the fault occurs.
What happens when we cannot discriminate multiple fault situations? In section 3.2, while outlining the generic algorithm for the atemporal case, we left the solution to the decision-making agent. Here we want to be more precise.
Definition 8 Let ⟨A, ≺, χ⟩ be a basic model for recovery actions. We define the function merge : 2^A → 2^A as follows:

merge(B) = B \ {a ∈ B | there exists a′ ∈ B such that a ≺ a′}    (4)

We moreover define merge-set(A) = {merge(B) | B ⊆ A, B ≠ ∅}, which is the set of compound recovery actions; it includes basic recovery actions in the form of singletons.
The intuition behind merge is that when we cannot discriminate multiple fault situations we simply merge the corresponding recovery actions. This means that we collect all recovery actions, and then remove the unnecessary ones (equation (4)). An action in a set becomes unnecessary when the set contains a stronger action. Thus, given a te-set E, we define:

Act(E) = merge({Act(sit) | sit ∈ E})

If we take into account compound actions, we can extend the notion of model for recovery actions as follows: Definition 9 An extended model for recovery actions is a triple ⟨merge-set(A), ≺_ext, χ_ext⟩ where: (1) ≺_ext ⊆ merge-set(A) × merge-set(A) is defined by A′ ≺_ext A″ iff A′ ≠ A″ and merge(A′ ∪ A″) = A″; (2) χ_ext : merge-set(A) → ℝ⁺ is a cost function over compound actions such that for every A ∈ merge-set(A) we have max_{a∈A} χ(a) ≤ χ_ext(A).
While ≺_ext is uniquely determined by ≺, the same does not hold for χ_ext: for this reason there is more than one extended model for any basic model. The requirement that max_{a∈A} χ(a) ≤ χ_ext(A) is motivated as follows: if there existed a ∈ A such that χ_ext(A) ≤ χ(a), then it would make sense never to perform a, replacing it with A. In fact, {a} ≺_ext A and A would have the same or lower cost. We also ask χ_ext, as we did for basic models, to be monotonic with respect to ≺_ext. In the following we shall consider only extended models for recovery actions, and we shall thus drop the ext subscript from both ≺ and χ. Figure 7 shows a possible extension for the basic model in Figure 6. In this case the cost of the compound action {b, c} is given by the sum of the individual costs of b and c.
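A possible rendering of merge and merge-set (our own Python sketch; `weaker(x, y)` encodes x ≺ y):

```python
from itertools import chain, combinations

def merge(b, weaker):
    """merge(B): drop every action that has a strictly stronger one in B."""
    return {a for a in b if not any(weaker(a, a2) for a2 in b if a2 != a)}

def merge_set(basic, weaker):
    """merge-set(A): the compound actions obtained by merging the
    non-empty subsets of A (singletons included)."""
    subsets = chain.from_iterable(
        combinations(basic, r) for r in range(1, len(basic) + 1))
    return {frozenset(merge(set(s), weaker)) for s in subsets}
```

With the running example (d ≺ b ≺ a, d ≺ c ≺ a), merging {b, c, d} drops d and keeps the incomparable pair {b, c}, and the full merge-set is {{a}, {b}, {c}, {b, c}, {d}}, as listed later in the paper's worked example.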

The Problem of Building Temporal Decision Trees
In this section we outline the peculiarities of building temporal decision trees, showing the differences with respect to the "traditional" case.

The Challenge of Temporal Decision Trees
What makes the generation of temporal decision trees more difficult than that of standard ones is the requirement that time labels do not decrease when moving from the root to the leaves: this corresponds to assuming that sensor values cannot be stored; when the decision-making agent decides to wait, it gives up using all the values that the sensors show while it is waiting.
If we relax this restriction we can actually generate temporal decision trees with a minor variation of ID3, essentially by considering each pair formed by a sensor and a time label as an individual test. In other words, ID3ChooseTest selects a sensor s and a time label t such that reading s at time t allows for the maximum discrimination among examples.
However in systems such as the ones we are considering, that is, low-memory real-time systems, the possibility of performing the diagnostic task without discarding dynamics but also without having to store sensor values across time is a serious issue to take into account. For this reason the definition of temporal decision tree includes the requirement that time labels be non-decreasing on root-leaf paths. Figure 8 shows a generic algorithm for building temporal decision trees that can help us outline the difficulties of the task. Line 8 shows a minor modification aimed at taking deadlines into account: an observation can be used on a given set of examples only if its time label does not exceed the set's global deadline. Violating this condition would result in a tree that selects a recovery action only after the deadline for the corresponding fault situation has expired.
The major change with respect to the standard algorithm is however shown in line 15: once we select an observation pair ⟨sensor, tlabel⟩, we must remove from the set of observations all those pairs whose time label is lower than tlabel.⁵
5. Actually this assumes that reading a sensor and moving down the tree accordingly can be done so swiftly that the qualitative sensor values have no time to change in the meanwhile. If this is not the case, one can choose to remove also the pairs with time labels equal to tlabel, or more generally those with time labels lower than tlabel + k, where k is the time needed by the diagnostic agent to carry out tree operations. For the sake of simplicity, however, in the following we will assume that k is 0, since the choice of k does not affect the approach we propose.

As a consequence of these operations (ruling out invalid observations and discarding those that are in the past) the set of observations available when building a child node can be different from the one used for its parent in more than one way:
• Some observations can become valid for the child because the global deadline for the child's (smaller) set of examples can be later than the one for the parent's set.
• Some observations can become unavailable for the child because they have a time label lower than that used for the parent.
Of course the problematic issue is the latter: some observations are lost, and among them there may be some information which is necessary for properly selecting a recovery action. Let us consider as an example the te-set in Figure 9, with four fault situations, two time labels (0 and 1) and only one sensor (s). Each fault situation is characterized by a different recovery action, and the te-set obviously allows us to discriminate all of them. However the entropy criterion would first select the observation ⟨s, 1⟩, which is more discriminating.
Figure 9: A te-set causing problems to the standard ID3 algorithm if used for temporal decision trees.
The observation ⟨s, 0⟩ would then become unavailable, and the resulting tree could never discriminate sit_2 and sit_3. This shows that there is a relevant difference between building standard decision trees and building temporal decision trees. Let us look again at the generic algorithm for standard decision trees presented in Figure 3: the particular strategy implemented in ChooseTest does not affect the capability of the tree to select the proper recovery action, but only the size of the tree. Essentially the tree contains the same information as the set of examples, at least as far as the association between observations and recovery actions is concerned. We can say that the tree always has the same discriminating power as the set of examples, meaning that the only case in which the tree is not capable of deciding between two recovery actions is when the set of examples contains two fault situations with identical observations and different actions.
If we consider the algorithm in Figure 8 we see that the order in which observations are selected (that is, the particular implementation of ChooseObs) can affect the discriminating power of the tree, and not only its size. Since from one recursive call to the next some observations are discarded, we can obtain a tree with less discriminating information than the original set of examples. Our primary task is then to avoid such a situation, that is, to build a tree which is small, but which does not sacrifice relevant information. As a consequence, we cannot simply select an observation with minimum entropy.
In the next sections we shall formalize the new requirements for the output tree, and propose an implementation of ChooseObs which meets them.

Each Tree Has a Cost
In the previous section we informally introduced the notion of discriminating power. In this section we shall introduce a more general notion of expected cost of a temporal decision tree. Intuitively, the expected cost associated with a temporal decision tree is the expected cost of a recovery action selected by the tree, with respect to the probability distribution of the fault situations.
Expected cost is a stronger notion than discriminating power: on the one hand, if a tree discriminates better than another, then it also has a lower expected cost (we shall soon prove this statement). On the other hand, expected cost adds something to the notion of discriminating power, since any two trees are comparable from the point of view of cost, while they may not be from the point of view of the discrimination they carry out.
Before defining expected cost, we need some preliminary definitions. We shall make use of a function, named examples, that given an initial set of examples E and a tree T associates with each node of the tree a subset of E. To understand the meaning of this function before formally defining it, let us imagine running the tree on E and, at a certain point of the decision process, reaching a node n: examples tells us the subset of fault situations that we have not yet discarded.
Definition 10 Let E be a te-set with sensors s_1, ..., s_m, time labels t_1, ..., t_last and actions model ⟨A, ≺, χ⟩. Moreover let T = ⟨r, N, E, L, T⟩ be a temporal decision tree such that for every internal node n ∈ N we have L(n) ∈ {s_1, ..., s_m} and T(n) ∈ {t_1, ..., t_last}. We define a function examples(·; E) : N → 2^E as follows:

examples(r; E) = E
examples(n; E) = {sit ∈ examples(p; E) | Val(sit, L(p), T(p)) = L((p, n))}, where p is the parent of n.

examples is well defined since for any n ∈ N different from the root there exists exactly one p ∈ N such that (p, n) ∈ E.

Notice that, if E is the set of examples used for building T, examples(n; E) corresponds to the subset of examples used while creating node n.
Example 5 Let us consider Figure 10: it shows the same tree as Figure 5, but for every node n we can also see the set of fault situations examples(n; E), where E is the te-set of Figure 1.  In the following when using function examples we shall omit the second argument, denoting the initial te-set, when there is no ambiguity about which te-set is considered.
Not every tree can be used on a given set of examples: actually we need some compatibility between the two, which is characterized by the following definition.
Definition 11 Let E be as in the previous definition. We say that a temporal decision tree T = ⟨r, N, E, L, T⟩ is compatible with E if every fault situation in E ends up in exactly one leaf, that is, if for each sit ∈ E there is exactly one leaf l of T such that sit ∈ examples(l). For each sit ∈ E we then denote this leaf by leaf_T(sit). We are now ready to formalize the notion of discriminating power.
Definition 13 Let T = ⟨r_T, N_T, E_T, L_T, T_T⟩ and U = ⟨r_U, N_U, E_U, L_U, T_U⟩ denote two temporal decision trees compatible with the same te-set E. Let moreover ⟨A, ≺, χ⟩ be the recovery action model used in building T and U. We say that T is more discriminating than U with respect to E if: (1) for every sit ∈ E, either L_T(leaf_T(sit)) ≺ L_U(leaf_U(sit)) or L_T(leaf_T(sit)) = L_U(leaf_U(sit)); (2) there exists sit ∈ E such that L_T(leaf_T(sit)) ≺ L_U(leaf_U(sit)).
Notice that the second condition makes sure that the two trees are not equal (in which case they would be equally discriminating), something that the first condition alone cannot guarantee.
Example 6 Let us consider the tree in Figure 10 above and the tree in Figure 11 below. The former is more discriminating than the latter. In fact, the two trees associate the same actions with sit_1, sit_2, sit_3, sit_4, sit_5 and sit_6. However the former associates action b with sit_7 and action c with sit_8, while the latter associates the compound action {b, c} with both sit_7 and sit_8.
Unfortunately we cannot easily use discriminating power, as defined above, as a preference criterion for decision trees. The reason is that it does not define a total order on decision trees, but only a partial one.⁶ The following situations may arise:
• for some sit ∈ E, T selects a weaker (thus cheaper) action than U, while for some other sit′ ∈ E the opposite holds;
• for some sit ∈ E, the actions L_T(leaf_T(sit)) and L_U(leaf_U(sit)) are not comparable with respect to ≺.

6. For the sake of readability, all proofs are collected in a separate appendix at the end of the paper.
From the point of view of discriminating power alone, it is reasonable for T and U not to be comparable in the above cases. Nonetheless, there may be a reason for preferring one over the other, and this reason is cost. For example if we consider the second situation, even if L T (sit) and L U (sit) are not directly comparable from the point of view of their strength, one of the two may be cheaper than the other and thus preferable. We therefore introduce the notion of expected cost of a tree.
Definition 14 Let T = ⟨r, N, E, L, T⟩ be a temporal decision tree compatible with a te-set E and an action model A = ⟨A, ≺, χ⟩. We inductively define an expected cost function X_{E,A} : N → ℝ⁺ on tree nodes as follows:

X_{E,A}(n) = χ(L(n)) if n is a leaf;
X_{E,A}(n) = Σ_{c : (n,c) ∈ E} P(L(n) → L((n, c))) · X_{E,A}(c) if n is an inner node;

where P(L(n) → L((n, c))) is the probability of sensor L(n) showing the value v = L((n, c)) and is given by:

P(L(n) → v) = Σ {P(sit; examples(n)) | sit ∈ examples(n) and Val(sit, L(n), T(n)) = v}

The expected cost of T with respect to E and A, denoted by X_{E,A}(T), is then defined as:

(9) X_{E,A}(T) = X_{E,A}(r), where r is the root of T.

The above definition states that:
• the expected cost of a tree leaf l is simply the cost of its recovery action L(l);
• the expected cost of an inner node n is given by the weighted sum of its children's expected costs; the weight for child c is the probability P(L(n) → L((n, c)));
• the expected cost of a temporal decision tree T is the expected cost of its root.
The following proposition states that the weighted sum for computing the expected cost of the root can be performed directly on tree leaves.

Proposition 15
Let T = ⟨r, N, E, L, T⟩ denote a temporal decision tree compatible with a te-set E, and let l_1, ..., l_u be its leaves. Then:

X_{E,A}(T) = Σ_{i=1}^{u} P(examples(l_i); E) · χ(L(l_i))

Figure 11: A temporal decision tree less discriminating than the one in Figure 10.
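Proposition 15 suggests a direct way to compute the expected cost of a tree by summing weighted leaf costs. The sketch below uses our own encoding of trees as nested dictionaries (the node fields are assumptions of this sketch, not the paper's notation):

```python
def expected_cost(tree, prob, chi):
    """X(T) as in Proposition 15: the sum over the leaves l of
    P(examples(l); E) * chi(L(l)).

    Leaves:      {"action": a, "examples": [...]}
    Inner nodes: {"children": {outcome: subtree, ...}}
    prob: sit -> P(sit; E);  chi: action -> cost
    """
    if "action" in tree:
        return chi[tree["action"]] * sum(prob[s] for s in tree["examples"])
    return sum(expected_cost(c, prob, chi)
               for c in tree["children"].values())
```

The recursion over children is equivalent to the inductive definition, since the branch probabilities multiply out to the probability mass of each leaf's example set.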
The next proposition shows that expected cost is monotonic with respect to the "better discrimination" relation, and therefore it is a good preference criterion for temporal decision trees, since a tree with the lowest possible expected cost is the most discriminating one, and moreover it is the cheapest among equally discriminating trees.

Restating the Problem
In the previous section we introduced expected cost as a preference criterion for decision trees. Given this notion, we can restate the problem of building temporal decision trees as that of building a tree with the minimum possible expected cost. This section formally shows that the notion of "minimum possible expected cost" is well defined, and more precisely that it corresponds to the cost of a tree that exploits all observations in the te-set. The goal can then be expressed as finding a reasonably small tree among those whose expected cost is minimum.
In this section, as well as formalizing the above mentioned notions, we introduce some formal machinery that will be useful in proving the correctness of our algorithm.

Definition 17
Let E denote a te-set, and let t_1, ..., t_last denote its time labels. We say that sit_i, sit_j ∈ E are pairwise indistinguishable, and we write sit_i ∼ sit_j, if for all time labels t < min{Dl(sit_i), Dl(sit_j)} and for all sensors s we have that Val(sit_i, s, t) = Val(sit_j, s, t).
As a relation, ∼ is obviously reflexive and symmetric, but it is not transitive. If we consider a sit_k with a particularly strict deadline, it might well be that sit_i ∼ sit_k and sit_k ∼ sit_j, but sit_i ≁ sit_j. We now introduce a new relation ≈, the transitive closure of ∼.

Definition 18
Let E denote a te-set. We say that sit_i, sit_j ∈ E are indistinguishable, and we write sit_i ≈ sit_j, if there exists a finite sequence sit_{k_1}, ..., sit_{k_u} ∈ E such that: • sit_{k_1} = sit_i; • sit_{k_u} = sit_j; • for every g = 1, ..., u − 1, sit_{k_g} ∼ sit_{k_{g+1}}.
≈ is an equivalence relation over E, and we denote by E/≈ the corresponding quotient set. We have the following definition:
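Since ≈ is the transitive closure of ∼, the quotient set E/≈ can be computed, for example, with a union-find structure (our own sketch; `indist(a, b)` stands for a test of the pairwise relation ∼):

```python
def quotient_set(sits, indist):
    """Compute E/≈, where ≈ is the transitive closure of the
    pairwise-indistinguishability relation given by `indist`."""
    parent = {s: s for s in sits}

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, a in enumerate(sits):       # union every ∼-related pair
        for b in sits[i + 1:]:
            if indist(a, b):
                parent[find(a)] = find(b)

    classes = {}
    for s in sits:                     # group by representative
        classes.setdefault(find(s), set()).add(s)
    return list(classes.values())
```

Note that two situations end up in the same class even when they are only chained through a third one, exactly the non-transitivity case discussed above.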

Definition 19
Let E be a te-set with actions model A. The expected cost of E, denoted by X_{E,A}, is defined as:

X_{E,A} = Σ_{S ∈ E/≈} P(S; E) · χ(Act(S))

where Act(S) is the merge of the recovery actions associated with the situations in S.

Example 8 Let us consider the te-set E in Figure 1 and the action model A in Figure 7. The only two indistinguishable fault situations in E are sit_2 and sit_5. Notice that the tree in Figure 10 has the same cost as the te-set; its cost is thus the minimum possible, as we show below. Of course we may still be able to build another, smaller tree with the same cost.
Now we need to show that X E,A is actually the minimum possible expected cost for a temporal decision tree compatible with E.
Theorem 20 Let E be a te-set with actions model A. We have that: (i) there exists a temporal decision tree T compatible with E such that X_{E,A}(T) = X_{E,A}; (ii) for every temporal decision tree T compatible with E, X_{E,A} ≤ X_{E,A}(T). Now we can state the problem of building a temporal decision tree more precisely: given a te-set E with actions model A, we want to build a temporal decision tree T over E such that X_{E,A}(T) = X_{E,A}. Moreover, we want to keep the tree reasonably small by exploiting entropy.

The Algorithm
In this section we describe in detail our proposal for building temporal decision trees from a given te-set and action model. We also discuss the complexity of the algorithm we introduce, and give an example of how the algorithm works.

Preconditions
Our goal is now to define an implementation of the function ChooseObs that, once plugged into the function BuildTemporalTree, yields a solution to the problem of building temporal decision trees as stated in section 5.3. First, however, we shall analyze some properties of BuildTemporalTree as defined in Figure 8: this will lead us smoothly to the solution and will help us formally prove its correctness. In order to accomplish this task we need to introduce some notation that allows us to speak about algorithm properties.
Let E be a te-set with fault situations {sit_1, ..., sit_n}, sensors {s_1, ..., s_m}, time labels {t_1, ..., t_last} and action model A. We aim at computing our tree T by executing BuildTemporalTree on these inputs. Figure 12 describes ID3ChooseSafeObs, the implementation of ChooseObs we propose. It exploits the properties we have proved in this and the previous section in order to achieve the desired task in an efficient way. Let us examine it in more detail.
ID3ChooseSafeObs (Figure 12) computes the set of safe observations (line 4) and then chooses among them one with minimum entropy (line 5). By what we have proved up to now, such an implementation yields a temporal decision tree with minimum expected cost, and at the same time exploits entropy in order to keep the tree small.
Let us now see how FindSafeObs (also in Figure 12) computes the set of safe observations. Proposition 23 shows that the notion of safeness is tied to time labels rather than to individual observations. First of all FindSafeObs determines the range of valid time labels for the current set of examples (line 12): the lower bound t_low is the lowest time label in Obs, and is stored in the variable t_low; the upper bound t_up is given by the global deadline for Examples, and is stored in the variable t_up.
The idea is then to find the maximum safe label t_max (variable t_max), which allows us to easily build the set of safe observations (line 21).
In order to accomplish this task the following steps have to be performed:
• Given the initial te-set E, defined by Examples, Obs and ActModel, compute X_{E,A}.
• For each time label t in the range delimited by t_low and t_up, consider the te-set E_t defined by Examples and by those observations with time label equal to or greater than t. Then compute X_{E_t,A}.
• As soon as we find a time label t with X_{E_t,A} > X_{E,A}, we know that t_max is the time label immediately preceding t.
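The three steps above amount to a scan over the valid time labels; a minimal sketch follows, assuming a hypothetical helper `cost_at(t)` that returns X_{E_t,A} (which the paper computes via quotient sets):

```python
def find_max_safe_label(labels, cost_at):
    """Scan the valid time labels upward; t_max is the last label whose
    restricted te-set E_t still has the minimum expected cost X(E).

    labels:  ascending valid time labels t_low .. t_up
    cost_at: t -> expected cost of the te-set restricted to the
             observations with time label >= t (assumed helper)
    """
    base_cost = cost_at(labels[0])     # X of the full te-set E = E_{t_low}
    t_max = labels[0]
    for t in labels[1:]:
        if cost_at(t) > base_cost:     # information needed before t is lost
            break
        t_max = t
    return t_max
```

An observation is then safe exactly when its time label does not exceed t_max.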
Here the most critical operation (in terms of efficiency) is computing the expected cost of each E_t, because this involves finding the quotient set E_t/≈. In fact, in order to obtain the quotient set, we need to repeatedly partition the te-set with respect to all the observations available for it. The QuotientSet function (Figure 12) performs precisely this task. It takes as input the current time label tlabel, an initial partition (possibly made of a single block containing the entire te-set) and the set of all observations, from which it will select the valid ones.
First of all it partitions the input te-set with respect to observations with the current time label (lines 28-31). Then it executes iteratively the following operations: • For each partition block it checks whether the deadline has moved further in time (lines 36-38).
• If so, it partitions again the block and stores the resulting sub-blocks for further examination (lines 39-41).
• If not, the block is part of the Final partition that will be returned (line 42).
In order to simplify the task, we introduce as a data type the extended partition, where each partition block is stored together with the highest time label used in building it. In this way we can easily check whether the deadline for the block allows us to exploit more observations. Since we use extended partitions instead of standard ones, we need to define a new function, ExtPartition, which works in the same way as the Partition function used in Figure 4, but also records with each block the highest time label used for it. Notice that QuotientSet needs the whole set of observations (and not only the valid ones) to properly compute the result; therefore when BuildTemporalTree calls ID3ChooseSafeObs it must pass UsefulObs instead of ValidObs as the first argument.
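The refinement step on extended partitions can be sketched as follows (our own rendering; each block carries the highest time label used to build it, as described above):

```python
def ext_partition(blocks, observations, value_of):
    """Refine an extended partition, i.e. a list of (sits, highest_label).

    observations: iterable of (sensor, tlabel) pairs to split on
    value_of:     (sit, sensor, tlabel) -> qualitative value
    """
    for sensor, tlabel in observations:
        refined = []
        for sits, hi in blocks:
            by_value = {}
            for s in sits:  # group the block's situations by observed value
                by_value.setdefault(value_of(s, sensor, tlabel), []).append(s)
            # every sub-block records the highest label used so far
            refined.extend((sub, max(hi, tlabel))
                           for sub in by_value.values())
        blocks = refined
    return blocks
```

Calling this repeatedly on the same partition, with observations of decreasing time labels, realizes the refinement behaviour that QuotientSet relies on.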
FindSafeObs exploits QuotientSet to find all quotient sets for all E t , but does so using an efficient approach which we call backward strategy.
First of all, we can notice that the order in which observations are considered does not matter while building a quotient set. Moreover, if t < t′, E_t/≈ is a refinement of E_{t′}/≈; in other words, we can obtain it from E_{t′}/≈ by simply refining the partition with additional observations. Thus, we can compute all quotient sets at the same time as we compute E/≈. FindSafeObs does exactly this: it computes all quotient sets and their expected costs starting from the last time label t_up. Each quotient set is not built from scratch, but as a refinement of the previous one. This is the reason why QuotientSet (and ExtPartition as well) takes as its first argument not a single set, but a partition. In this way, all quotient sets are computed with the same operations⁸ needed to build E/≈. The next section analyzes complexity issues in further detail.

Complexity
In this section we aim at showing that the additional computations needed in building temporal decision trees do not lead to a higher asymptotic complexity than the one we would get by using the standard ID3 algorithm on the same set of examples (we discussed in section 5.1 the circumstances that could make such an approach feasible).
Essentially the difference between the two cases lies in the presence of the FindSafeObs function. Wherever BuildTree calls ID3ChooseTest, BuildTemporalTree calls ID3ChooseSafeObs, which in turn calls both FindSafeObs and ID3ChooseTest.
Let us compare FindSafeObs and ID3ChooseTest, which are similar in many ways. The latter repeatedly partitions the input te-set with respect to every available observation, and then computes the entropy of each partition built in this way. FindSafeObs instead builds just one partition by exploiting all available observations; in other words, instead of using each observation to partition the initial te-set, it exploits it to refine an existing partition of the same te-set. Moreover, at each time label it computes the expected cost of the partition built so far. Essentially, if we denote by N_S the number of sensors and by N_T the number of time labels in the initial partition, we have roughly the following comparison:
• N_S × N_T (the number of observations) entropy computations for ID3ChooseTest vs. N_T expected cost computations for FindSafeObs;
• N_S × N_T partitions of the initial te-set for ID3ChooseTest vs. N_S × N_T refinements of existing partitions of the initial te-set for FindSafeObs.
Entropy and expected cost can be computed with roughly the same effort: both require retrieving some information for each element of each partition block, and combining this information in some quite straightforward way. The complexity of this task depends only on the overall number of elements, and not on how they are distributed between the different blocks of the partition. So even if the expected cost is usually computed on finer partitions than the entropy, the only thing that matters is that both are partitions of the same set, thus involving the same elements.

Now let us examine the problem of creating a partition. This involves retrieving a value for each element of each block of the initial partition (which again depends only on the number of elements, and not on the number of blocks of the initial partition) and properly assigning each element to a new partition block depending on the original block and on the new value. The main difference in this case between starting with the whole te-set (creation) and starting with an initial partition (refinement) is the size of the new blocks being created, which are smaller in the second case. Depending on how we implement the partition data type, this may make no difference, or refinement may take less time. In any case, refinement (corresponding to FindSafeObs) never requires more time than creation (corresponding to ID3ChooseTest).

8. There is a slight overhead due to the need to find which observations should be used at each step.
Therefore we can claim that the FindSafeObs function has the same asymptotic complexity as the function ID3ChooseTest. Thus ID3ChooseSafeObs also has the same asymptotic complexity as ID3ChooseTest, and we can conclude that BuildTemporalTree has the same asymptotic complexity as BuildTree.

An Example
In this section we shall show how our algorithm operates on the te-set in Figure 1 with respect to the actions model in Figure 7.
Let us summarize the information the algorithm receives. Eight fault situations are involved; moreover, we can exploit three sensors, each of which can show five different qualitative values: h (high), n (normal), l (low), v (very low), z (zero). Time labels correspond to natural numbers ranging from 0 to 7, and we assume they correspond to times measured by an internal clock started at the time of fault detection. There are four basic recovery actions a, b, c, d, such that d ≺ b ≺ a and d ≺ c ≺ a. The set of compound recovery actions is thus A = {{a}, {b}, {c}, {b, c}, {d}}; the ordering relation is pictured in Figure 7, together with action costs.
BuildTemporalTree is first called on the whole te-set. Neither of the two terminating conditions is met (notice, however, that there are two observations that are not useful, since they do not discriminate: ⟨s3, 0⟩ and ⟨s3, 1⟩). Then the main function calls ID3ChooseSafeObs and consequently FindSafeObs. Since the global deadline is 2, we must check time labels 0, 1 and 2, starting from the last one.
Exploiting only observations with time label 2 we obtain the partition {{sit1, sit6}, {sit2, sit5, sit7, sit8}, {sit3}, {sit4}}. However, in order to find the expected cost we still have to check whether for some partition block the deadline has changed; this happens for {sit1, sit6} as well as for {sit3} and {sit4}. For the last two blocks nothing changes, since they already contain only one element. As to the first block, the deadline is now 5 and thus it is possible to further split the partition. Therefore we obtain that the partition for time label 2 is {{sit1}, {sit6}, {sit2, sit5, sit7, sit8}, {sit3}, {sit4}}. After finding the partition, the algorithm computes the expected cost, which turns out to be:

X_{E,t=2} = χ(Act(sit1)) · 1/8 + χ(Act(sit6)) · 1/8 + χ(Act({sit2, sit5, sit7, sit8})) · 1/2 + χ(Act(sit3)) · 1/8 + χ(Act(sit4)) · 1/8
          = 100 · 1/8 + 10 · 1/8 + 70 · 1/2 + 20 · 1/8 + 50 · 1/8 = 57.5

Then the algorithm moves to time label 1; starting from P_{t=2} it adds observations with time label 1, obtaining a new partition. Deadlines move for {sit7} and {sit8}, but since these are singletons the new observations cannot further split the partition. The expected cost X_{E,t=1} is computed in the same way; since X_{E,t=1} < X_{E,t=2}, we can conclude that observations with time label 2 are not safe. We now move to time label 0, and we immediately realize that the new observations do not change the partition. Thus X_{E,t=0} = X_{E,t=1}, and the safe observations are those with time label equal either to 0 or to 1.
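The arithmetic above can be checked with a short script (a sketch of our own: the block names are ours, while the χ values are those of the example and a uniform prior over the eight fault situations is assumed):

```python
# Expected cost of a partition: sum over blocks of chi(Act(block)) * P(block; E),
# with P(block; E) = |block| / |E| under a uniform prior.

def expected_cost(blocks, chi, total):
    return sum(chi[name] * len(sits) / total for name, sits in blocks)

# Partition for time label 2, with the action costs from the example.
blocks_t2 = [
    ("a1",    {"sit1"}),                           # chi(Act(sit1)) = 100
    ("a6",    {"sit6"}),                           # chi(Act(sit6)) = 10
    ("a2578", {"sit2", "sit5", "sit7", "sit8"}),   # chi = 70
    ("a3",    {"sit3"}),                           # chi = 20
    ("a4",    {"sit4"}),                           # chi = 50
]
chi = {"a1": 100, "a6": 10, "a2578": 70, "a3": 20, "a4": 50}

print(expected_cost(blocks_t2, chi, total=8))  # -> 57.5
```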
Let us focus on call c1: again, none of the terminating conditions is met; therefore the algorithm invokes ID3ChooseSafeObs and thus FindSafeObs. Notice however that on this subset only a few observations are in UsefulObs: ⟨s1, 3⟩, ⟨s2, 3⟩, ⟨s2, 4⟩, ⟨s2, 5⟩ and ⟨s3, 5⟩. The global deadline is 5.
First we find the partition for time label 5, {{sit1}, {sit6}}, and its expected cost:

X_{E_v,t=5} = χ(Act(sit1)) · 1/2 + χ(Act(sit6)) · 1/2 = 100 · 1/2 + 10 · 1/2 = 55

No additional observations can further split this partition and lower the cost; therefore we find that all valid observations are also safe. It is moreover obvious that all these observations have the same entropy, which is 0. Therefore the algorithm can non-deterministically select one of them; a reasonable criterion would be to select any of the earliest ones, for example ⟨s1, 3⟩.
Since the initial te-set for call c1 is now split in two, there are two more recursive calls. However, we can notice that if BuildTemporalTree is called on a te-set with a single element, the first terminating condition is trivially met (all the fault situations have the same recovery action). The function simply returns a tree leaf with the name of the proper recovery action. Figure 15.(b) shows the tree after c1 has been completed. Now let us examine c2: the algorithm eliminates non-discriminating observations and finds out that the set of useful observations is empty. Thus it builds a leaf with recovery action {b, c}. Let us pass to call c3. In this case none of the terminating conditions is met: the algorithm must then look for safe observations. The global deadline is 5, so we start examining time label 5. Much as happened for c1, no additional observation can further split the partition, so we can conclude that all valid observations are also safe. Figure 14 shows the entropy of all valid observations; the earliest one with minimum entropy is ⟨s3, 2⟩, and this is the one the algorithm selects.
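The entropy used to break ties among safe observations can be sketched as follows (our own illustration, not the paper's code; it assumes a uniform prior over the fault situations in the current te-set and measures the entropy of the partition an observation induces):

```python
import math

# Entropy heuristic: among equally safe observations, prefer one inducing
# a low-entropy (i.e. highly discriminating) partition, which tends to
# keep the resulting tree small.

def partition_entropy(block_sizes):
    """Entropy of a partition, given the size of each block."""
    total = sum(block_sizes)
    return -sum((n / total) * math.log2(n / total) for n in block_sizes)

print(partition_entropy([2, 2]))  # -> 1.0 (an even two-way split)
print(partition_entropy([1, 3]))  # less balanced, so lower than 1.0
# A non-discriminating observation leaves a single block: entropy 0.
```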
The two recursive sub-calls that are generated immediately terminate: {sit4} is a singleton, and in {sit3, sit7} both fault situations correspond to the same recovery action. The last recursive call, c4, has a singleton as input and thus immediately terminates. The final decision tree T is pictured in Figure 15.(c). The expected cost of the tree turns out to be X_{E,A}(T) = 48.75. If we look back at Example 8 we see that X_{E,A} = 48.75; thus T has the minimum possible expected cost. Moreover, we can compare T with the tree T1 of Example 5: T1 also has the minimum possible expected cost, but T is more compact.

Conclusions
In this paper we introduced a new notion of diagnostic decision tree that takes into account temporal information on the observations and temporal constraints on the recovery actions to be performed. In this way we can take advantage of the discriminatory power that is available in the model of a dynamic system. We presented an algorithm that generates temporal diagnostic decision trees from a set of examples, discussing also how an optimal tree can be generated.
The automatic compilation of decision trees seems to be a promising approach for reconciling the advantages of model-based reasoning and the constraints imposed by the on-board hardware and software environment. It is worth noting that this is not only true in the automotive domain, and indeed the idea of compiling diagnostic rules from a model has been investigated also in other approaches (see e.g., Darwiche, 1999; Dvorak & Kuipers, 1989). Darwiche (1999), in particular, discusses how rules can be generated for those platforms where constrained resources do not allow a direct use of a model-based diagnostic system.
What is new in our approach is the possibility of also compiling information concerning the system's temporal behaviour, obtaining in this way more accurate decision procedures.
To the best of our knowledge, temporal decision trees are a new notion in the diagnostic literature. However, there are works in other fields that have some relation to ours, since they are aimed at learning rules or associations that take time into account. Geurts and Wehenkel (1998) propose a notion of temporal tree to be used for early prediction of faults. This topic is closely related to diagnosis, albeit different in some ways: the idea is that the device under examination has not failed yet, but by observing its behaviour it is possible to predict that a fault is about to occur. Geurts and Wehenkel propose to learn the relation between observed behavioural patterns and consequent failures by inducing a temporal tree.
The notion of temporal tree introduced by Geurts and Wehenkel differs from our temporal decision trees, reflecting the different purpose for which it was introduced. Rather than sensor readings, it considers a more general notion of test, and the tree does not specify the time to wait before performing the tests: rather, the agent running the tree is supposed to wait until one of the tests associated with a tree node becomes true.
Also the notion of optimality is quite different: in the situation described by Geurts and Wehenkel the size of the resulting tree is not a concern. The tree-building algorithm then aims at minimizing the time at which the final decision is taken. In our algorithm, size is the primary concern, while from the point of view of time it suffices that diagnosis is carried out within certain deadlines. From the point of view of time alone, the approach by Geurts and Wehenkel is probably more general than ours; the problem of considering the trade-off between diagnostic capability and time needed for diagnosis is one of the major extensions we are considering for future work on this topic (see below).
Finally, the algorithm proposed by Geurts and Wehenkel works in a quite different way than ours: it first builds the tree greedily, using an evaluation function that weighs discriminating power against the time needed to reach a result, and selecting at each step the test that optimizes such a function. Then it prunes the tree in order to avoid overfitting. On the other hand, our approach aims at optimizing the tree from the point of view of cost, and at the same time tries to keep the tree small with the entropy heuristic. We think that, since optimization can be carried out at no additional cost 9 with respect to the minimization of entropy, our approach can obtain better results, at least in those cases where one can define a notion of deadline.
The process of learning association rules involving time has also been studied in other areas, such as machine learning (see for example Bischof & Caelli, 2001, where the authors propose a technique to learn movements) and data mining. While the specific diagnostic tailoring of our approach makes it difficult to compare with more generic learning algorithms, the connections with data mining may be stronger. Our proposal in fact essentially aims at extracting from series of observations those patterns in time that allow a fault to be correctly diagnosed: this process can be regarded as a form of temporal classification. A preliminary investigation of papers in this area (see Antunes & Oliveira, 2001, for an overview) seems to suggest that, whereas the analysis of temporal sequences of data has received much interest in recent years, not much work has been done in the direction of data classification, where temporal decision trees could be exploited.
This suggests an interesting development for our work, in particular as concerns its applicability in other areas. However, we believe that the algorithm we presented needs to be extended in order to be exploited in other contexts. In particular, we are investigating the following extensions: • Deadlines could be turned from hard into soft. Soft deadlines do not have to be met, but rather define a cost associated with not meeting them. Thus missing a deadline becomes an option that can be taken into account when it is less expensive than performing a recovery action while the diagnosis is incomplete. One could even define a cost that increases as time passes after the expiration of the deadline. Such an extension would also allow modelling the trade-off between discriminating power and the time needed by the decision process, which we believe is the key to making our work applicable in other areas.
• Actions could be assumed to have a different cost depending on the fault situation; for example, the action associated with a fault could become dangerous, and thus extremely expensive, if performed in the presence of another fault.
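The soft-deadline idea in the first bullet could be sketched as follows (a speculative illustration of ours, not part of the paper's algorithm; the linear penalty rate and the function name are assumptions):

```python
# Speculative sketch: missing a deadline is allowed, but charged a
# penalty that grows with the delay, here linearly with slope `rate`.

def soft_deadline_cost(action_cost, finish_time, deadline, rate=1.0):
    """Total cost of performing a recovery action at finish_time
    against a soft deadline: base cost plus a delay penalty."""
    delay = max(0.0, finish_time - deadline)
    return action_cost + rate * delay

print(soft_deadline_cost(70, finish_time=5, deadline=5))             # -> 70.0
print(soft_deadline_cost(70, finish_time=8, deadline=5, rate=10.0))  # -> 100.0
```

A tree builder could then weigh such penalized costs against the gain in discriminating power obtained by waiting, instead of treating the deadline as a hard feasibility constraint.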
9. From the point of view of asymptotic complexity.
In the long term, future work on this topic will aim at widening its areas of applicability and at investigating in greater detail its connections with other fields, such as fault prevention and data mining.

Appendix A. Proofs
This section contains the proofs of all propositions, lemmas and theorems in the paper.

Proposition 15 Let T = ⟨r, N, E, L, T⟩ denote a temporal decision tree, and let l1, . . . , lu be its leaves. Then

X_{E,A}(T) = Σ_{i=1..u} χ(L(l_i)) · P(examples(l_i); E)   (10)

Proof. By induction on the depth of T. If T has depth 0 then it consists of a single leaf l, and (10) holds trivially, since examples(l) must be equal to E and P(E; E) = 1.
If T has depth > 0, let T1, . . . , Tk denote its direct subtrees and c1, . . . , ck their roots. We can regard each Ti as an autonomous temporal decision tree compatible with the te-set Ei = examples(ci). By induction hypothesis we have that:

X_{E_i,A}(T_i) = Σ_{l leaf of T_i} χ(L(l)) · P(examples(l); E_i)   (13)

Moreover, by definition of expected cost:

X_{E,A}(T) = Σ_{i=1..k} X_{E_i,A}(T_i) · P(E_i; E)   (14)

From (13) and (14), and since P(examples(l); E_i) · P(E_i; E) = P(examples(l); E), we thus obtain:

X_{E,A}(T) = Σ_{i=1..k} Σ_{l leaf of T_i} χ(L(l)) · P(examples(l); E)   (15)

Since the leaves of T are all and only the leaves of T1, . . . , Tk, (15) is equivalent to the thesis.

Proposition 16 Let T = ⟨rT, NT, ET, LT, TT⟩ and U = ⟨rU, NU, EU, LU, TU⟩ be two temporal decision trees compatible with the same te-set E and the same actions model A. If T is more discriminating than U then X_{E,A}(T) < X_{E,A}(U).
Proof. Rewriting equation (10) we obtain:

X_{E,A}(T) = Σ_{sit∈E} χ(L_T(leaf_T(sit))) · P(sit; E)   (16)

X_{E,A}(U) = Σ_{sit∈E} χ(L_U(leaf_U(sit))) · P(sit; E)   (17)

Since T is more discriminating than U, we have that for all sit ∈ E: L_T(leaf_T(sit)) ≺ L_U(leaf_U(sit)) or L_T(leaf_T(sit)) = L_U(leaf_U(sit)), with at least one sit satisfying the first relation. By definition of χ it follows that for all sit: χ(L_T(leaf_T(sit))) ≤ χ(L_U(leaf_U(sit))), and for at least one sit: χ(L_T(leaf_T(sit))) < χ(L_U(leaf_U(sit))). Therefore, if we compare the individual elements of the two sums in (16) and (17), we observe there exists at least one sit for which χ(L_T(leaf_T(sit))) · P(sit; E) < χ(L_U(leaf_U(sit))) · P(sit; E), and for all other sit, χ(L_T(leaf_T(sit))) · P(sit; E) ≤ χ(L_U(leaf_U(sit))) · P(sit; E), which concludes the proof.
Theorem 20 Let E be a te-set with actions model A. We have that: (i) There exists a decision tree T compatible with E such that X_{E,A}(T) = X_{E,A}.
(ii) For every temporal decision tree T compatible with E, X_{E,A} ≤ X_{E,A}(T).
In order to prove this theorem we introduce some lemmas.

Lemma 26
Let T be a temporal decision tree compatible with a te-set E. Then sit_i ≈ sit_j implies leaf_T(sit_i) = leaf_T(sit_j).

Proof.
We prove that sit_i ∼ sit_j implies leaf_T(sit_i) = leaf_T(sit_j), from which the lemma easily follows. Let us suppose that leaf_T(sit_i) ≠ leaf_T(sit_j). This means that there is a common ancestor n of the two leaves such that sit_i, sit_j ∈ examples(n) and Val(sit_i, ⟨L(n), T(n)⟩) ≠ Val(sit_j, ⟨L(n), T(n)⟩). Since sit_i ∼ sit_j, this is possible only if T(n) > min{Dl(sit_i), Dl(sit_j)}. But since T is compatible with E, it must hold that T(n) ≤ Dl(examples(n)) ≤ min{Dl(sit_i), Dl(sit_j)}, which contradicts the previous statement.

Lemma 27
Let E be a te-set with sensors s1, . . . , sm, time labels t1, . . . , tlast and actions model A. There exists a temporal decision tree T = ⟨r, N, E, L, T⟩ such that X_{E,A}(T) = X_{E,A}.
Proof. In order to prove the thesis we construct a tree T with the same expected cost as the te-set.
Let us define a total order on observations in E as follows: ⟨s, t⟩ < ⟨s′, t′⟩ if either t < t′, or t = t′ and s precedes s′ in a lexicographic ordering. Let us denote by o1, . . . , omax the ordered sequence of observations thus obtained. We shall define T level by level (starting from the root, at level 1), giving the value of L and T for nodes at level h.
T has a maximum of max + 1 levels, where max is the number of observations. New levels are added until all nodes in a level are leaves (which, as we shall see, happens at most at level max + 1). Let thus n be a node at level h, and let ⟨s_{i_h}, t_{i_h}⟩ = o_h if h ≤ max. We have: n is a leaf if h = max + 1 or Dl(examples(n)) < t_{i_h}; an internal node otherwise.

L(n) = merge({Act(sit) | sit ∈ examples(n)}) if n is a leaf; L(n) = s_{i_h} if n is an internal node.

T(n) = t_{i_h} if n is an internal node.
A decision-making agent running such a tree would essentially take into account all sensor measurements at all time labels until either there are no more available observations or it must perform a recovery action because a deadline is about to expire. Now we need to show that X_{E,A}(T) = X_{E,A}. Let l1, . . . , lu denote the leaves of T. We shall first of all prove that sit_i ≈ sit_j if and only if leaf_T(sit_i) = leaf_T(sit_j), or, equivalently, that the sets examples(l1), . . . , examples(lu) coincide with the equivalence classes of E/≈. This, together with equations (10) and (11), yields the thesis.
We already know from lemma 26 that sit_i ≈ sit_j implies leaf_T(sit_i) = leaf_T(sit_j); we need to show that the opposite is also true. Let us thus assume that sit_i, sit_j ∈ examples(l) for some l ∈ {l1, . . . , lu}. Let r = n1, n2, . . . , nH, nH+1 = l be the path from the root to l. We know from the definition of T that l is a leaf either because H + 1 = max + 1 (so that sit_i and sit_j agree on all observations) or because Dl(examples(l)) < t_{i_{H+1}}. In the first case we immediately obtain that sit_i ∼ sit_j and thus sit_i ≈ sit_j. In the second case, there must be sit_k ∈ examples(l) such that Dl(examples(l)) = Dl(sit_k). Moreover, Dl(sit_k) = min{Dl(sit_i), Dl(sit_k)} = min{Dl(sit_j), Dl(sit_k)}. Since all considerations above apply also to sit_k, we thus have that sit_i ≈ sit_k and sit_k ≈ sit_j; therefore by transitivity sit_i ≈ sit_j.

Lemma 28
Let T′ = ⟨r′, N′, E′, L′, T′⟩ be a decision tree compatible with a te-set E with actions model A. Then X_{E,A} ≤ X_{E,A}(T′).
Proof. Let T be as defined in the proof of lemma 27. In order to prove the thesis it suffices to show that T is either equally 10 or more discriminating than T′ (see proposition 16). Actually we shall show that, given sit ∈ E, either L(leaf_T(sit)) ≺ L′(leaf_T′(sit)) or L(leaf_T(sit)) = L′(leaf_T′(sit)).
We know that examples(leaf_T(sit)) ⊆ examples(leaf_T′(sit)). In fact, let sit′ be an element of examples(leaf_T(sit)) different from sit itself: by construction of T we have that sit ≈ sit′, and by lemma 26 it follows that leaf_T′(sit) = leaf_T′(sit′).
Let A = {Act(sit′) | sit′ ∈ examples(leaf_T(sit))} and A′ = {Act(sit′) | sit′ ∈ examples(leaf_T′(sit))}; by the inclusion above, A ⊆ A′. Thus, having L(leaf_T(sit)) = merge(A) and L′(leaf_T′(sit)) = merge(A′), we obtain that either the action selected by T is weaker than that selected by T′, or it is the same. Now we can prove theorem 20.

10. Rather intuitively, two trees are equally discriminating if they associate to each fault situation the same recovery action.
11. We exclude terminal calls because they do not even compute Obs Update.
Proof. We shall prove (1) and (2) for every recursive call c (rather than only for c0). The proof is by induction on the depth of the recursion starting from c. Let v1, . . . , vk be the possible values for o: then c has k inner recursive calls to BuildTemporalTree, which we shall denote respectively by c1, . . . , ck. We have that {E_c1, . . . , E_ck} is a partition of E*_c. By definition of expected cost (14) we have that: Moreover, since E_c and E*_c also differ only in the observations, P(E_ci; E_c) = P(E_ci; E*_c). Therefore we can write: In order to prove (1), we can apply the induction hypothesis X_{E_ci,A}([[T]]_ci) ≥ X_{E_ci,A} and obtain: Now let us work on the right-hand side expression in (18): Notice however that {E_ci/≈} is a partition of E*_c/≈; in other words, each η ∈ E*_c/≈ belongs to exactly one set E_ci/≈. In fact, splitting examples according to the value of one observation cannot split a class of indistinguishable observations. Thus the above equality becomes: This, together with (18), yields: As mentioned above, the only difference between E*_c and E_c is that the former has fewer observations. This implies that, if sit ≈ sit′ in E_c, then sit ≈ sit′ in E*_c as well. This means that E_c/≈ is a sub-partition 12 of E*_c/≈ in the following sense: we can partition every θ ∈ E*_c/≈ into η(θ) = {η_1, . . . , η_{k_θ}} such that for each η_j there exists exactly one η′_j ∈ E_c/≈ containing exactly the same fault situations as η_j. This yields: Since each η_j has the same fault situations as the corresponding η′_j, and necessarily for each η_j there is a θ containing it, we have: χ(Act(η′)) · P(η; E*_c)

12. E_c/≈ is not a sub-partition of E*_c/≈ in the ordinary sense because they do not have the same set of observations.