HOW HUMAN-MACHINE TEAMS CREATE, EXPLAIN, AND RECOVER FROM COORDINATION BREAKDOWNS: A SIMULATOR STUDY OF DISTURBANCE MANAGEMENT ON MODERN FLIGHT DECKS


DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By

Mark I. Nikolic, M.S.

*****

The Ohio State University
2004

Dissertation Committee:

Nadine B. Sarter, Adviser
David D. Woods
Philip J. Smith

Approved by

Adviser
Industrial and Systems Engineering Graduate Program


ABSTRACT

In many domains, the introduction of automation technology is considered a mixed blessing because it has not only extended operator capabilities and increased the safety and efficiency of operations, but has also led to new cognitive demands, which have created new opportunities for errors and performance breakdowns. Considerable efforts have been directed at preventing erroneous actions and assessments through training, design, and procedures. However, error prevention alone will never be a sufficient strategy for improving safety in complex high-risk systems. Rather, a more effective solution requires a deeper understanding of how operators cope with their errors, or more appropriately, with the resulting disturbances to the monitored process. As the final step in a research program that included jump-seat observations, a flight instructor survey, and an incident database analysis, the first full-mission simulator study in this area was conducted with twelve airline pilots in order to examine (the effectiveness of) current pilot strategies for diagnosing and recovering from disturbances, and the impact of current automation design on these processes. Pilots flew a one-hour scenario which contained challenging events that probed pilots' knowledge of, and proficiency in using, the autoflight system. A process tracing methodology was used which integrated behavioral and verbal data in order to identify patterns in strategies across pilots. Overall, pilots completed the scenario successfully but varied considerably in how they coped with disturbances to their flight path. Our results show that aspects of feedback design delayed the detection, and thus escalated the severity, of a disturbance. Diagnostic episodes were very rare due to pilots' knowledge gaps as well as time-criticality. Consequently, in most cases, generic, rather inefficient, recovery strategies were observed, and pilots tended to rely on high levels of automation when trying to manage the consequences of erroneous actions or assessments. Furthermore, our scenario illustrated the role of external agents in coordinating recovery actions by various participants in the system. Our findings are discussed in the context of disturbance management and the development of cognitive tools to support this process.

To Bébé and Bug

ACKNOWLEDGMENTS

I would like to express my sincerest gratitude to my adviser, Nadine Sarter, for all of the opportunities, support, and attention she gave to me and my academic endeavors. It has made for a memorable graduate experience across eight years and two schools that I will always appreciate and take with me. I also thank my committee members, David Woods and Phil Smith, for their guidance during this project and for the thought-provoking courses that preceded it. I am very grateful to Randy Mumaw for many helpful discussions and for help in providing access to the resources required for this project. I also received invaluable assistance from Jack Bard during the development and implementation of the scenario. And many thanks to the study participants who volunteered to spend several hours in a simulator with a curious graduate student. This research was supported by a grant from the Federal Aviation Administration and by a Presidential Fellowship awarded to me by The Ohio State University. And to my wife, Sara, thank you for everything.

VITA

B.S. Psychology, University of Illinois at Urbana-Champaign
Graduate Teaching and Research Assistant, University of Illinois at Urbana-Champaign
M.S. Psychology, University of Illinois at Urbana-Champaign
2000-present: Graduate Research Associate, The Ohio State University

PUBLICATIONS

Research Publications

1. Nikolic, M.I. & Sarter, N.B. (2001). Peripheral Visual Feedback: A Powerful Means of Supporting Attention Allocation and Human-Automation Coordination in Highly Dynamic Data-Rich Environments. Human Factors, 43(1).

2. Nikolic, M.I., Orr, J., & Sarter, N.B. (2004). Why Pilots Miss the Green Box: How Display Context Undermines Attention Capture. International Journal of Aviation Psychology, 14(1).

FIELDS OF STUDY

Major Field: Industrial and Systems Engineering
Area of Study: Cognitive Systems Engineering
Minor Field: Cognitive Science

TABLE OF CONTENTS

Abstract
Acknowledgments
Vita
List of Tables
List of Figures
Introduction
What Is Human Error?
  Reason: The GEMS Error Taxonomy
  The Cognitive Systems Approach
  Error as Adaptation to Variability
Error Management
  Error Detection
  Error Explanation
  Error Recovery
Error Management: Extending the Framework
Error Management: A Form of Disturbance Management?
Research Goals
Brief Overview of Autoflight and the Flight Management System
Methods
  Research activities to-date
    Instructor survey
    Incident report analysis
    Modeling work
  Simulator Study
    Participants
    Simulator
    Procedure
    Scenario
    Scenario Events
      Event 1: LNAV Overlap
      Event 2: PESCA Climb
      Event 3: LNAV Capture
      Event 4: VNAV ALT mode
      Event 5: Descent restrictions
  Data collection and analysis
Results
  Event 2: PESCA CLIMB
  Event 3: LNAV Capture
  Event 4: VNAV ALT
  Event 5: DESCENT RESTRICTIONS
  Event 1: LNAV OVERLAP
Discussion
  Summary of results
  Detection
  Diagnosis and Explanation
  Error Recovery Strategies
  Revisiting Disturbance Management
  Design Recommendations
  Methodological Considerations
Conclusion
List of References
Appendix A: Scenario script
Appendix B: Observer Form

LIST OF TABLES

Table 1. Error taxonomy, adapted from Reason (1990)
Table 2. Participant flight hours
Table 3. Summary of scenario performance
Table 4. Pilot performance summary of descent restrictions

LIST OF FIGURES

Figure 1. Error Recovery Strategies (adapted from Jambon, 1998)
Figure 2. Jambon's Error Recovery Taxonomy
Figure 3. Error Management: A Linear Model
Figure 4. The Error Management Cycle
Figure 5. Locations of pilot interfaces on the flight deck
Figure 6. Proposed context-sensitive recovery model
Figure 7. Overview of the scenario showing waypoints on the flight plan
Figure 8. LNAV Event
Figure 9. VNAV mode transition diagram
Figure 10. Vertical profile view of descent restrictions
Figure 11. Composite of abstracted solution paths for PESCA Climb event
Figure 12. Solution path for Pilot 3 in the PESCA Climb Event
Figure 13. Solution path for Pilot 4 in the PESCA Climb Event
Figure 14. EICAS display
Figure 15. Solution path for Pilot 7 in the PESCA Climb Event
Figure 16. A schematic of the CDU 'ECON CLM' page
Figure 17. Composite of abstracted solution paths for LNAV event
Figure 18. Example of a Navigation (Map) Display
Figure 19. CDU 'LEGS' page as it would appear during LNAV Capture Event
Figure 20. Solution path for Pilot 8 in the LNAV Capture Event
Figure 21. Composite of abstracted solution paths for VNAV ALT event
Figure 22. Solution path for Pilot 3 in the VNAV ALT Event
Figure 23. Solution path for Pilot 5 in the VNAV ALT Event
Figure 24. Solution path for Pilot 12 in the VNAV ALT Event
Figure 25. Solution path for Pilot 11 in the VNAV ALT Event

INTRODUCTION

Human operators of highly automated systems are crucial to successful joint human-machine performance because they can adapt their behavior and systems to handle the demands and constraints of the operational environment in a way that context-insensitive machines cannot do on their own. In that sense, human practitioners fill a context gap between the machine and the world by instructing, monitoring, and, when necessary, intervening with, automated systems to produce the desired behavior or outcome. Operators often perform this function effectively, under conditions of time pressure, with incomplete knowledge, and with clumsy tools (Wiener, 1989) that exacerbate workload under high-tempo operations and provide poor feedback about current and future behavior. When the desired outcome is achieved, the entire system is perceived to be functioning properly and successfully. Yet, when this adaptive process breaks down and leads to system failures, the breakdown tends to be blamed on human error. For example, human error is cited as the cause, or a contributing factor, in the majority of incidents and accidents in the complex event-driven domain of aviation (e.g., Boeing, 1994).

Note that, in the above context, the term error is used as a post-hoc attribution of blame for a negative outcome (Reason, 1990; Woods et al., 1994). This perspective is problematic for various reasons. First, it fails to consider that operators behave in a locally rational way, i.e., that their actions make sense at the time given their knowledge and situation assessments. Also, defining errors based on consequences ignores the loose coupling between process and outcome (Woods et al., 1994). In other words, an error does not inevitably lead to an undesired outcome and, conversely, a desired outcome is not necessarily the result of flawless performance. Loose coupling exists, in part, because contextual factors (such as timing or system state) affect the consequences of an action. In addition, human operators are quite effective at managing (i.e., detecting, diagnosing, and recovering from), and thus mitigating the consequences of, errors or, more appropriately, breakdowns in human-machine coordination. Empirical research has shown, for example, that commercial airline pilots commit up to 5-10 errors per hour (Amalberti, 1996); yet the very low accident rate in this domain illustrates that these systems exhibit a strong degree of error resilience (see Seifert and Hutchins, 1992, for an example in the maritime navigation domain) thanks to pilots' successful management of errors and resulting disturbances, i.e., abnormal conditions or malfunctions where the actual process state deviates from the desired function for the relevant operating context (Woods, 1994).

Given the importance of error and disturbance management for system safety, surprisingly little is known about the underlying processes and factors that contribute to success or failure. Research in the area of human error to date has focused primarily on the classification, causation, prevention, and detection of erroneous actions and assessments. In contrast, few studies have examined the diagnostic and recovery processes that follow detection and serve to alleviate the negative consequences of an error.

Also, little empirical research exists on error management specifically in the context of complex dynamic domains, where erroneous actions can have multiple cascading effects that need to be managed in parallel with continuing to monitor and control the (potentially affected) ongoing process. In these environments, error management tends to occur under time pressure and often requires the coordination of multiple goals and constraints. Thus, it can be reframed and studied under the existing framework of disturbance management, where human activities and human-machine mismatches (rather than malfunctions or system failures) contribute to the evolution of a problem. Disturbance management refers to the activity of diagnosing the underlying source(s) of a disturbance (i.e., a deviation from a desired process state for the given context) in parallel with coping with the disturbance itself by maintaining the integrity and goals (i.e., efficiency, safety) of an underlying dynamic process (Woods, 1988). Some important questions in this context are: (When) Is diagnosis necessary for recovery? What recovery strategies do operators choose in various task contexts? How effective are these strategies, and does this depend on the type of error or disturbance? How do current tools and domain context constrain the solution space? Answers to these questions, and thus better insight into the nature and effectiveness of disturbance management processes and associated operator strategies, are crucial for developing new tools that can support operators in handling disturbances even more effectively. The research reported here represents an important step toward this goal.

The following sections will lay out a foundation for the present research, beginning with a discussion of the nature and definition of human error and error types. Errors are then reframed as human-machine coordination breakdowns or disturbances within a joint cognitive system, and the focus shifts to the process of managing these disturbances effectively. Past research within the areas of error and disturbance management is discussed and used to motivate the current study and methodology: a scenario-based simulation study of pilot error and disturbance management on highly automated aircraft.

WHAT IS HUMAN ERROR?

Reason: The GEMS Error Taxonomy

Although the importance of studying error has long been recognized from the perspective of safety and design, there is no universally agreed-upon definition of human error. Reason (1990), for example, defines errors as occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome. Other definitions consider errors as deviations from intention, expectation, or desirability (Senders & Moray, 1991). And Hollnagel (1993) defines an erroneous action as one that fails to produce the expected result or produces an unwanted consequence. Ultimately, these definitions require judgments of the outcome of an action, in hindsight, to determine whether an error has been made. However, a loose coupling exists between process and outcome (Woods et al., 1994), which means that not every error will lead to an undesired outcome, and not every desired outcome is the result of a perfect process.

A new look at human error emerged from the observation that errors are heterogeneous events that cannot all be placed in the same category. Erroneous actions and assessments can take many different forms and can be described and categorized along various dimensions. They can be described in terms of their phenotype, i.e., their surface features and manifestation in domain-specific terms (e.g., controlled flight into terrain, altitude deviation, or overshooting a runway), as is often done in the context of safety statistics. However, cross-domain comparisons are facilitated by discussing errors instead in terms of their genotypes, i.e., their underlying domain-independent characteristics (Hollnagel, 1993), such as the cognitive stage or the performance level at which they occur.

One of the best-known and most widely used genotype-based error classification taxonomies is Reason's (1990) generic error-modeling system (GEMS), which is based on Rasmussen's (1983) model of different levels of human performance. Rasmussen's framework was developed in order to describe operator behavior (and errors) during supervisory control tasks. He distinguished between three levels of performance: (1) skill-based, for routine activities performed in an automatic way, (2) rule-based, for familiar problems that can be handled by stored rules, and (3) knowledge-based, for novel problems for which no rules exist and a plan or solution must be generated on-line. Using Rasmussen's framework, Reason distinguishes between monitoring failures that occur during the performance of routine actions (i.e., skill-based errors) and problem-solving failures that occur at the planning or intention formation stage. The latter include rule-based and knowledge-based errors. These three categories of errors can be related to a different error classification scheme that describes errors in terms of the cognitive stages at which they occur (see Table 1).

Cognitive stage    Error type    Performance level
Planning           Mistakes      rule-, knowledge-based
Storage            Lapses        skill-based
Execution          Slips         skill-based

errors of commission - erroneous action taken
errors of omission - required action not taken

Table 1. Error taxonomy, adapted from Reason (1990)

Slips and lapses are skill-based errors that involve the unintentional commission (slips) or omission (lapses) of an action. In other words, these errors involve the incorrect execution (or forgetting) of a required and intended action to achieve the plan. Since skill-based actions tend to be heavily automated, errors occurring at that level are primarily due to attentional failures. In contrast to slips and lapses, mistakes involve the formulation of an inappropriate plan or the selection of an inadequate strategy for achieving a goal. Mistakes occur at the planning or intention formation stage, regardless of whether the subsequent execution of the plan is flawless. Mistakes can be either rule- or knowledge-based. Rule-based mistakes involve the application of an incorrect rule or the incorrect application of an appropriate rule for solving a problem. Knowledge-based mistakes occur in the context of novel situations for which no prescribed solutions exist, requiring on-line problem solving, often with limited information and under time pressure.
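The mapping in Table 1 is mechanical enough to express as a small lookup. The following Python sketch is purely illustrative (the enum and variable names are ours, not part of GEMS) and simply encodes the table:

    from enum import Enum

    class Stage(Enum):
        PLANNING = "planning"      # intention formation
        STORAGE = "storage"        # retaining the intended action
        EXECUTION = "execution"    # carrying out the intended action

    class ErrorType(Enum):
        MISTAKE = "mistake"        # rule- or knowledge-based
        LAPSE = "lapse"            # skill-based omission
        SLIP = "slip"              # skill-based commission

    # Table 1 as a lookup: the cognitive stage at which performance
    # failed determines the GEMS error type.
    GEMS_TABLE = {
        Stage.PLANNING: ErrorType.MISTAKE,
        Stage.STORAGE: ErrorType.LAPSE,
        Stage.EXECUTION: ErrorType.SLIP,
    }

    # Example: forgetting to engage an already selected automation mode
    # fails at the storage stage, so GEMS would classify it as a lapse.
    assert GEMS_TABLE[Stage.STORAGE] is ErrorType.LAPSE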

The Cognitive Systems Approach

Despite their usefulness in making cross-domain comparisons, genotype-based error taxonomies do not help us understand why errors or human-machine system breakdowns occur. For example, the above-mentioned error categories do not necessarily address the fact that, in many real-world domains, errors occur in the interaction between human operators and advanced automation technologies. Focusing on this collaborative aspect of performance, the joint cognitive systems approach (Woods et al., 1994) views erroneous actions and assessments as symptoms of a mismatch between the design of computer-based technology and the information needs and attentional resources of the human operator.

This approach also highlights an important distinction that is often confounded in many studies of human error: error as an outcome failure and error as a deficient process (Woods et al., 1994). For example, Hollnagel (1993) discussed three senses in which the label error is used in the context of safety research:

Error as the cause of a failure. This interpretation assumes that error is a categorical type of human behavior that precedes and generates a failure. For example, "the accident was due to human error."

Error as the failure itself. This description of error is just a redundant statement that an adverse event occurred. For example, "the crash was an error."

Error as a process. Here, the departure from a model of good practice is emphasized. However, it is not always clear what that model is, and whether there is a single correct view.

Contemporary perspectives recognize that behavior labeled as human error is not a flaw or weakness that resides within people, but is rather a symptom of mismatches within a system (Rasmussen, 1986; Woods et al., 1994). The remaining discussion of error focuses on error in the third sense, as a process or, rather, as part of the larger process of adaptation.

Error as Adaptation to Variability

As stated by Ashby's (1956) Law of Requisite Variety, variety in the control system must be sufficient to match (i.e., counteract) the variability in the world in order to maintain effective regulation. In other words, the human operator's role is to adapt the system around externally and internally generated variability. Despite sophisticated designs and extensive operating procedures, there will always be a gap between how a system is conceived to work and how it must be made to work in a situated context. For example, plans and procedures will be inherently incomplete and brittle, as they cannot be sufficient or applicable for all possible contingencies that a system may encounter (Suchman, 1987). Therefore, adaptations must occur in order for a complex human-machine system to function. However, adaptive processes are not perfect and often take place under conditions of time stress, incomplete knowledge, and uncertainty.
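Ashby's law also has a compact information-theoretic statement. The formulation below is a standard one from the cybernetics literature rather than one given in this dissertation, and is included only to make the constraint explicit:

    H(O) \ge H(D) - H(R)

Here H(D) is the variety (entropy) of the disturbances the world can generate, H(R) is the variety of responses available to the regulator (in this setting, the human operator together with the automation), and H(O) is the residual variety in the outcomes to be controlled. Outcome variability can only be driven down by increasing the regulator's variety, which is precisely the adaptive role attributed to the human operator above.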

Thus, there is the potential for two types of human-machine mismatches: cases in which human variability is unacceptably great or inappropriate for the given context (over-adaptation), and cases in which the need to adapt is not recognized or human variability is insufficient to cope with changes in system performance (under-adaptation). The question, then, is how operators recognize and recover from these mismatches. How do they re-adapt the system back to a desired state? We may begin to look for answers to these questions by reviewing a growing body of research on human error and error management. To date, most error-related research has focused on classification taxonomies (Bove and Anderson, 2003; Klinect et al., 1999; Norman, 1981; Reason, 1990) and the prevention of errors through improved training and design (Hollnagel, 2001; Hutchins, 1994; Nikolic and Sarter, 2001; Norman, 1988; Sklar and Sarter, 1999; Spiro et al., 1988; Woods et al., 1981). However, taxonomies alone do not produce understanding of (or solutions to) erroneous actions (Dekker, 2003), and errors and human-machine mismatches will continue to occur despite revised procedures, improved training, and more effective designs. Also, the desirability of completely eliminating errors has been questioned, since doing so may limit the opportunities for learning and the tuning of adaptive behavior (Rasmussen et al., 1994). Therefore, rather than focusing solely on preventing errors, another approach seems to be the mitigation of the negative consequences of erroneous actions through support for error management (EM), which involves the processes of error detection, error explanation, and error correction or recovery. The concept of EM and the research in this area are elaborated in the following section.

ERROR MANAGEMENT

Error management is an adaptive process that the operator engages in to minimize disturbances within a system. The process of error management has traditionally been viewed as comprising three stages: error detection, error explanation (or diagnosis), and error recovery (or correction). Variants of these terms have been used by different authors to refer to the processes that occur between the realization that an error has occurred and the correction of that error (Kontogiannis, 1997, 1999; Schaaf, 1988; Schaaf & Kanse, 2000; Wiener, 1993; Zapf & Reason, 1994). Note that the related concept of error tolerance (Rouse & Morris, 1987) must be distinguished from our definition of error management. Error tolerance refers to a system that is designed such that erroneous user actions do not lead to immediate, irreversible, catastrophic consequences. In that sense, error tolerance can be considered a means to support (rather than a component of) error management.

Research on error management is still in its infancy. A small number of empirical studies on error detection have been conducted in a variety of domains, but research on error explanation and recovery is limited almost exclusively to analytical work. The following sections will provide a review of the literature on error management.

Error Detection

To date, most research in the area of error management has focused on error detection. A seminal error self-detection study was conducted by Allwood (1984), who asked university students to think aloud while attempting to solve two statistical problems at their own pace. The task involved phases in which subjects worked toward the solution (progressive phase) and phases in which they stopped to check on their work (evaluative phase). Three general types of error-detection behaviors were observed: standard check (SC), direct error-hypothesis formation (DEH), and error suspicion (ES). Standard checks are initiated independently of an observed outcome; they are internally driven routine checks on progress. In contrast, direct error-hypothesis formation (DEH) and error suspicion (ES) behavior are data-driven and triggered by environmental feedback indicating a conflict between expected and observed results. The difference between DEH and ES is that, in the case of DEH, the person notices a specific error or problem whereas, in the case of ES, he/she suspects that there is some problem without knowing its exact cause and nature. In this study, DEH and ES episodes were the most effective mechanisms for error detection. One limitation of this study is that the verbal protocol data do not allow us to determine whether DEH or ES episodes arose from stored error representations or from mismatches between expectation and outcome. Furthermore, the self-paced nature of the statistical task leaves open the question whether the observed error detection processes occur in dynamic event-driven domains to the same extent and in the same form.

Sellen (1994) approached the study of error detection using a diary study to collect a corpus of 600 errors from 75 individuals. Although potentially subject to memory distortions and biases, the analysis of these self-reported data resulted in a different taxonomy of error detection mechanisms: action-based detection, outcome-based detection, and limiting functions. Action-based detection relies on monitoring one's own activities during the execution stage, whereas outcome-based detection and limiting functions rely on feedback from completed actions as part of an evaluation phase. In both the Allwood and Sellen taxonomies, feedback from an action itself or information about the outcome of an action can be used to detect an error. In one case, the person actively looks for the information to perform a standard check on progress or because of an error suspicion; in the other case, the information attracts the person's attention because it shows something unexpected or undesirable.

A field observation study by Bagnara et al. (1987), which examined errors and error detection in experienced steel mill operators, found a pattern of results similar to the one emerging from the preceding studies. The operators in this study made 95 errors and detected 74 of them. Detection episodes were classified into three categories according to the way in which mismatches were perceived: inner feedback, external feedback, and forcing functions. These categories bear a close resemblance to those proposed by Sellen (1994), namely action-based and outcome-based detection, which map onto inner and external feedback, respectively, and forcing or limiting functions, which both authors list in their classification schemes. The observed association between error types and detection mechanisms was consistent with earlier research, except that knowledge-based mistakes were detected most often as the result of SC behavior. This finding points to a possible difference between naïve subjects and domain experts, and it highlights the importance of studying domain practitioners if one wants to understand error detection in applied settings. The limitation associated with field studies like this one is that the researcher can neither control for events nor guarantee that a full range of operator behaviors and responses to error will be observed.

Several studies have examined the role of experience in error detection. For example, Zapf et al. (1994) found that error detection in the context of working with office computer applications was higher with experienced users in the field compared to laboratory experiments involving novel tasks and inexperienced subjects. This suggests that experienced practitioners develop error detection (error-sensitive) strategies that become adaptively tuned to their tasks or method of working. Further support for the development of error-sensitive strategies with experience was found by both Allwood (1984) and Sellen (1994). Allwood noted that proficient problem-solvers tended to perform more standard checks to defend against their own potential errors, and were also more likely to trigger hypothesis generation in response to perceived contradictions of their expectations, as compared to poor problem-solvers. Similarly, Sellen found that people tended to check the outcome of their actions in situations in which they considered themselves to be error-prone.

Woods (1984) completed a review of research on error detection by expert practitioners in nuclear power plant control rooms. Data from 23 crews in 99 simulated emergency scenarios revealed that operators made two general classes of errors: state identification problems (misdiagnoses, or mistakes) and execution failures (slips). Only half of the 20 execution errors, and none of the 19 mistakes, were detected by the operators who made them. Mistakes were detected only by a fresh viewpoint in the form of an external agent. Woods attributed the failure of self-detection in these cases to a form of fixation that prevents the re-evaluation of a prior assessment, even in the presence of contradictory data. This reinforces the notion that one's own mistakes can be particularly difficult to detect because there is no discrepancy between intention and action (and thus no directly observable feedback). It is important to note that observers tend to be effective at detecting someone else's mistakes only when they have an understanding of the task and domain. The use of simulated scenarios in the research reviewed by Woods is an effective technique for eliciting realistic behaviors from experienced participants while, at the same time, gaining some control over the occurrence of events or situations of interest.

In summary, the above findings support the notion that errors are not uncommon occurrences, especially among expert practitioners. In addition, these studies are in general agreement that error detection mechanisms are based on data-driven (e.g., outcome-based, external feedback) and knowledge-driven (e.g., error suspicion, standard checks, internal feedback) processes that indicate a mismatch between actual and expected outcomes. The former is typically addressed through improved feedback design in order to make departures from expected outcomes salient, whereas the latter process is typically approached through training to improve mental models, which, in turn, allow for the formation of expectations with which to compare outcomes. In dynamic contexts involving competing attentional demands, data-driven and knowledge-driven processes constantly interact to determine the focus of attention (see Neisser's (1976) perceptual cycle), and successful error detection is a product of this interaction. While not the focus of the current research, error detection is the necessary starting point for the diagnosis and recovery stages, which have received far less attention in the literature and are reviewed below.

Error Explanation

Once an error or disturbance has been detected, the next possible step is to diagnose its cause.[1] A computer programming study by Bagnara and Rizzo (1989) identified several behavior patterns that could emerge after the detection of an error or disturbance: a) automatic (fast) causal analysis, in which the user immediately recognizes the cause; b) conscious (and thus more effortful and slower) causal analysis, which traces back through prior operator actions and system responses to generate a plausible hypothesis and a related recovery plan; or c) explorative causal analysis, which reflects uncertainty about the causal change and involves exploration, and iterative testing, of various candidate hypotheses. Note how the last two techniques involve an abductive reasoning process (i.e., inference to a best explanation from a collection of observations) that has been identified as characteristic of diagnosis during disturbance management (Woods, 1994).

[1] While the error management literature refers to this phase as explanation, we prefer the term diagnosis in order to avoid confusing the cognitive activity with the artificial intelligence connotation of machine-generated error messages.

These different forms of backward tracing, listed in increasing order of the time and cognitive resources required, may not always be necessary in order to recover successfully from an error. For example, diagnosis is redundant during rule-based behavior, when the error and the corresponding solution are known (either from training or from having previously encountered the error). In contrast, error diagnosis may be necessary during knowledge-based performance, when an error occurs for the first time or when the familiar solution is not appropriate given the present context. Situational factors such as time stress or poor feedback can impede or even prevent this phase from occurring before immediate error correction or recovery actions are required. Ultimately, in event-driven domains, operators have to trade off the need to understand an error against the need to act and prevent negative consequences.

To date, there is very little empirical research that focuses specifically on the diagnosis phase of error management. One of the few examples is a review of critical incidents in a steel plant (Schaaf & Kanse, 2000), which showed that the error recovery process in that environment rarely involved an analytic phase. Once an error has been detected and/or diagnosed, attempts can be made to recover. Most existing research related to the error recovery stage is analytical in nature. It will be reviewed in the following sections.

Error Recovery

Error recovery refers to the methods and strategies used to correct an error after it has been detected. Three possible corrective methods have been proposed (Dix et al., 1993; Mo and Crouzet, 1996): backward, forward, and compensatory recovery (see Figure 2). After detecting an error, users can attempt backward recovery, which brings the system back to the original state before the error was made. This requires a means to reverse the effects of actions that were taken. Three generic kinds of backward recovery are described by Lenman and Robert (1994): the undo, cancel, and stop functions. Ideally, backward recovery reverts the system back in time and removes any side-effects that were produced by the error. In contrast, forward recovery proceeds toward the goal, and may involve bringing the system to an intermediate stable state so that a solution can be found later on. This strategy is also known as "buying time" or "safing" the system, and can be used either when critical equipment is damaged in the case of a fault, or when the effects of an error need to be immediately corrected prior to finding a more optimal solution. In addition to forward and backward recovery, a third strategy, compensatory recovery, involves the activation of redundant equipment to bring the system to the desired goal state. If we adopt the perspective of the operator who must activate the redundant systems, it is not clear that this is a separate strategy. Instead, we may consider compensatory recovery a special case of forward recovery.

In an observational study of pilots, Klinect et al. (1999) observed three types of responses to procedural, communication, proficiency, and operational decision errors by flight crews: trap (36% of errors), exacerbate (11% of errors), or no response (53% of errors). Trapping an error is defined as actively managing it to an inconsequential outcome. In contrast, an error can be exacerbated through mismanagement, which can induce additional errors or undesired states. When no response was observed, the outcome was either inconsequential or linked to undesired aircraft states (illustrating the loose coupling between process and outcome that was outlined earlier). The authors point out that 36% of errors with no response led to undesired aircraft states. As a result, crews were managing the potential consequences of the undesired state rather than managing the error, which may be regarded as a form of forward recovery. The usefulness of findings from this study is limited by the authors' use of phenotypical error categorizations, which do not allow for generalizations. Also, the data from this study do not provide insights into the processes by which errors were managed by pilots. However, it illustrates a link between errors, the disturbances they can create, and the need to manage both, which supports a disturbance management approach to understanding error management.

In general, the dynamic nature of many domains does not allow one to recover perfectly to the initial or final intended state. Rather, only approximate recovery states (Figure 1) are possible after an error has occurred. In event-driven domains, such as aviation, forward or imperfect forward recovery is typically the only method available, and many of the non-normal or emergency procedures in these fields of work are examples of this recovery strategy. Furthermore, if one views the cost of an error as the cost of the corrective measure in addition to the cost of (re)executing the correct task, then forward recovery may prove to be less costly than full backward recovery (Jambon, 1998).

[Figure 1. Error Recovery Strategies (adapted from Jambon, 1998). The figure shows transitions among the initial state I (and approximate initial state I'), the erroneous state E, and the final expected state F (and approximate final expected state F'): a correct action leads from I to F; an erroneous action leads to E, from which backward recovery returns to I, imperfect backward recovery reaches I', forward recovery proceeds to F, and imperfect forward recovery reaches F'.]

In addition to the backward-forward recovery distinction, Jambon (1998) adds another dimension to his recovery taxonomy: generic and planned recovery approaches. Generic recovery mechanisms include functions such as undo, cancel, and stop, which are well-known to the operator, rapid, and involve little or no planning effort. Planned recovery approaches, in contrast, have a higher time cost but may ultimately be more time-efficient than backward recovery. Hence, the combination of forward/backward recovery with generic/planned recovery tasks yields four possible types of recovery (see Figure 2).

             Backward                       Forward
Planned      Sudden system fault recovery   System fault recovery, low-cost error recovery
Generic      Undo, cancel                   Performance of forgotten actions

Figure 2. Jambon's Error Recovery Taxonomy

Jambon is one of few authors to address the implications of static versus dynamic systems for error recovery. He points out that, in dynamic systems, an error can worsen over time and, as a result, recovery functions may not always be available. Furthermore, Jambon acknowledges that his taxonomy assumes that the user always wants to recover from an error when, in fact, this may not always be the case. Much like error explanation, error recovery may be forgone in dynamic systems if time does not allow it. Rather, one may be required to live with the error, if that is a tolerable solution. This strategy implies that the original goal state is revised to the current or an achievable state.

Embrey and Lucas (1988) examined the implications of error type for recovery strategy. Based on their findings, these authors argue that the difference between recovery from slips and lapses and recovery from mistakes has to do with the amount, type, and timing of feedback about the error, and with the relationship between the factors that cause the error and those that influence its detection. Specifically, recovery from skill-based slips and lapses tends to be independent of the process that caused the error, and the probability of success is high, assuming appropriate feedback is provided, either through the system or from additional team members or supervisors. In other words, recovery in these cases does not depend on the individual operator's understanding of how the error occurred. This suggests that skill-based errors can be recovered through skill-based automatic responses to routine errors, such as an undo function. The opposite is true of rule-based and knowledge-based mistakes, whose recovery depends on knowledge-based performance, requiring the generation of a yet unknown procedure, and which have a lower probability of recovery due to cognitive biases, such as fixation. This suggests that explanation may be beneficial for formulating effective procedures for recovering from mistakes.
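For illustration only (the type names below are ours, not Jambon's), the two dimensions of this taxonomy can be captured as a small data structure whose four instances correspond to the cells of Figure 2:

    from dataclasses import dataclass
    from enum import Enum

    class Direction(Enum):
        BACKWARD = "backward"   # return toward the pre-error state
        FORWARD = "forward"     # proceed toward the goal or a stable state

    class Mode(Enum):
        GENERIC = "generic"     # well-known, rapid, little planning effort
        PLANNED = "planned"     # composed on-line, higher time cost

    @dataclass(frozen=True)
    class RecoveryStrategy:
        direction: Direction
        mode: Mode
        example: str            # cell contents paraphrased from Figure 2

    JAMBON_TAXONOMY = [
        RecoveryStrategy(Direction.BACKWARD, Mode.GENERIC, "undo, cancel"),
        RecoveryStrategy(Direction.BACKWARD, Mode.PLANNED, "sudden system fault recovery"),
        RecoveryStrategy(Direction.FORWARD, Mode.GENERIC, "performance of forgotten actions"),
        RecoveryStrategy(Direction.FORWARD, Mode.PLANNED, "system fault or low-cost error recovery"),
    ]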

ERROR MANAGEMENT: EXTENDING THE FRAMEWORK

Thus far, we have identified three stages of error management: error detection, error explanation, and error recovery. Furthermore, we have considered relevant research that has been conducted with respect to each of the stages. Most research to date appears to regard error management as a linear process that moves through three discrete stages in the listed order. Error detection occurs first, which may then lead to an explanation/diagnosis phase to find the source of the error. As we have seen, diagnosis is not always a prerequisite to move on to the (sometimes urgent) recovery stage, in which one attempts to reverse or minimize the negative effects of the error. The model suggested by the literature (see Figure 3) gives a simplified account of error management-related activities performed by an operator, but fails to capture systemic and dynamic aspects of the error management process in applied work domains.

[Figure 3. Error Management: A Linear Model, showing detection (a mismatch between expectations and feedback), followed by explanation (why? what? when?), followed by recovery (the state transitions of Figure 1). Note: the error explanation box is shown in grey to indicate that this stage is often skipped.]

Most of the studies that have been reviewed in this report have studied or analyzed the detection, explanation, or recovery stages in isolation. It is important to realize, though, that the relationship between these stages is not necessarily hierarchical or unidirectional. It may help to adopt a different perspective which views detection, explanation, and recovery not as discrete stages that must be performed in a fixed order, but as activities that can occur concurrently or in alternating fashion (see Figure 4). Data from an incident analysis in the chemical process industry have suggested the cyclic nature of these activities. Kanse and Schaaf (2001) identified several patterns of responses to system failures that involved different sequences of short-term and long-term explanation and recovery behaviors after detecting a problem. These authors found that immediate corrective actions are often implemented prior to the occurrence of more elaborate, time-consuming explanation and recovery processes.

In dynamic systems, factors that could influence the flow of error management activities include changing evidence about the existence of errors, changes in the environment that may alter recovery strategies or effectiveness, and the need to explain errors in order to detect and/or recover from them. For example, observations on highly automated flight decks (e.g., Sarter and Woods, 1994, 2000) have shown that pilots can be surprised by unexpected automation behavior. Instructions that were programmed hours earlier may have been erroneous, and indications of the error may appear at disparate times, distributed across various instruments. While the pilot is attempting to explain the error, the dynamic nature of flight may require immediate recovery actions, which at the same time produce new effects of the original error that, in turn, must be handled (see disturbance management, discussed below). Alternatively, certain errors may not be detected and/or explained until a preliminary recovery process has been completed, which then provides time and perhaps additional information required for the explanation of the error. For example, an automation programming error that produces a course deviation must usually be corrected immediately by the pilot, who can then determine the source of the error, perhaps in consultation with other crew members, once the aircraft is back on course.

[Figure 4. The Error Management Cycle: the same detection, explanation, and recovery components as Figure 3, arranged as a cycle rather than a linear sequence. Note: the error explanation box is shown in grey to indicate that this stage may be skipped.]
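The difference between the linear model and the cycle can be made concrete with a toy control loop. The sketch below is our own schematic illustration (the MonitoredProcess interface is hypothetical, not drawn from this dissertation): under time pressure the operator recovers first and explains later, if at all, and every recovery action feeds back into renewed detection:

    from collections import deque

    class MonitoredProcess:
        """Hypothetical stand-in for a dynamic process under supervision."""
        def __init__(self, mismatches):
            self.pending = deque(mismatches)

        def detect_mismatch(self):
            # Compare feedback against expectations; None means no anomaly.
            return self.pending[0] if self.pending else None

        def explain(self, mismatch):
            return f"hypothesized cause of {mismatch!r}"

        def recover(self, mismatch, cause=None):
            self.pending.popleft()   # a real process might surface new effects here

    def manage(process, time_pressure_high):
        # Cycle: detect, optionally explain, recover, then re-detect, since
        # recovery itself changes the process and may reveal new mismatches.
        while (mismatch := process.detect_mismatch()) is not None:
            if time_pressure_high:
                process.recover(mismatch)            # act first, explain later (if ever)
            else:
                cause = process.explain(mismatch)    # diagnostic episode
                process.recover(mismatch, cause)

    manage(MonitoredProcess(["course deviation"]), time_pressure_high=True)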

ERROR MANAGEMENT: A FORM OF DISTURBANCE MANAGEMENT?

Past research on error management has, for the most part, focused on erroneous actions and assessments that occur in the context of single self-paced tasks which are performed with simple tools in static environments. These conditions allow operators to interrupt ongoing activities and focus on managing an error and its usually self-contained effects at any time. In contrast, the application domain for the present study is aviation, a highly complex, dynamic, and event-driven domain. Aviation is characterized by the need for multitasking and by frequent changes to both system state (i.e., the aircraft is moving along a path, and the automation status and behavior change accordingly) and the state of the environment in which it operates (i.e., the relative locations of terrain and other aircraft are changing, and instructions from air traffic control modify the route). In this environment, errors or, more appropriately, human-machine mismatches can produce effects that interact, cascade, and escalate over time. When the consequences of erroneous actions and assessments need to be managed in such an environment, it may be more appropriate to characterize this process as disturbance management (rather than error management).

Disturbance management refers to the activity of diagnosing the underlying source(s) of a disturbance (i.e., a deviation from a desired process state for the given context) in parallel with coping with the disturbance itself by maintaining the integrity and goals (i.e., efficiency, safety) of an underlying dynamic process (Woods, 1988). In the aviation domain, for example, a pilot needs to diagnose the source (for example, an erroneous input to the FMS) of an observed disturbance (such as a deviation from the flight path) and cope with the disturbance (by bringing the airplane back on course) while maintaining the integrity of the underlying process (i.e., while continuing to fly the airplane). The cognitive activities involved in disturbance management (see Woods, 1994) clearly overlap with those involved in error management in dynamic environments: 1) recurrent situation assessments that serve to detect the existence of a problem; 2) diagnostic search activity in order to identify (explain) the source of the observed behavior, which can be likened to error explanation; and 3) response selection, adaptive planning, and plan generation in an attempt to recover from the disturbance. Disturbance management, in contrast to error management in static domains, requires the operator to coordinate these information gathering and response activities while handling competing attentional demands and time pressure (Woods, 1994).

Field studies in several dynamic event-driven domains have identified generic types of incidents that highlight the temporally evolving sequence of events and process behaviors involved in disturbance management. For example, decompensation incidents occur when automated systems detect and automatically try to counteract the unwanted effects produced by a fault. As the magnitude of the disturbance increases, and in the absence of timely operator intervention, automatic systems may reach the limit of their ability to handle the problem, and the system is said to rapidly collapse, or decompensate (Cook, Woods, and McDonald, 1991). Decompensation can be a surprising event to an operator who was unaware of the system's activities (which had been masked by the system's countervailing influence) due to a lack of feedback or system observability. The temporal evolution of a disturbance can also be more gradual, as illustrated by "going sour" incidents in which the cascade of disturbances and associated cues slowly accrues over time (Woods and Sarter, 2000). This scenario presents an attentional challenge for the operator, who needs to process and integrate relevant cues in multiple places to notice and identify the anomalous pattern.

While disturbance management is usually discussed in the context of system faults (as in the above two cases), the same activities tend to be involved in handling the consequences of breakdowns in the interaction between humans, machines, and the complex dynamic environment in which they collaborate. For the remainder of this document, we will therefore use the term disturbance management to refer to pilots' efforts to cope with the effects of automation-related erroneous actions and assessments. While parallels exist, however, there are differences in the cognitive implications of coping with erroneous actions as opposed to system faults. Within the disturbance management paradigm, diagnostic search for the source of a fault is modeled as an abductive process, which may be preceded by or interwoven with intervention. Also, four basic modes of corrective responses to faults are identified: mitigate consequences, break propagation paths, terminate the source, and clean up aftereffects. But it is unclear how critical diagnosis is for the recovery of errors, where the focus is on managing the resulting disturbance as opposed to identifying the underlying error, which is ephemeral, unlike a physical system fault or malfunction.

RESEARCH GOALS

In summary, little empirical research has been performed to examine the processes and factors involved in the successful and poor handling of errors and disturbances, and few studies have done so in real-world dynamic environments (for some examples, see Kanse and Schaaf, 2001; Klinect et al., 1999; Wioland and Amalberti, 1998; Woods, 1984). The majority of work in this area has focused on error detection only, leaving unanswered questions about the other stages of error and disturbance management (i.e., explanation and recovery). It is also unclear whether the findings from research to date capture the entire range and effectiveness of possible recovery strategies. In complex and highly dynamic domains, such as aviation, operators need to coordinate their activities (including error recovery) with various human and machine agents, and they are often constrained by the continuously evolving context in which they operate. System coupling and interactions found in complex automation are likely to play a role but are usually absent from independent single-user contexts, in which erroneous actions tend to be reversible and do not propagate throughout the system.

Given that the computerization of high-risk work domains (and everyday life) can be expected to increase and potentially lead to more and new difficulties with human-machine collaboration, this research will address automation-related performance breakdowns in the context of human-machine coordination (in particular, pilot-automation interaction on modern flight decks). Using a joint cognitive systems approach (Woods et al., 1994), the goal of the present research is to improve our understanding of error explanation and recovery strategies, and to further inform models of disturbance management (and, as one subcategory, error management) in complex event-driven domains. More broadly, this work seeks to contribute to increased system safety by informing engineering and training interventions that address the problem of erroneous actions and assessments by overcoming mismatches between the design of computer-based technology and the information needs and attentional resources of the human operator. Three specific areas of interest are outlined in the following sections.

Explanation and recovery strategies. One of the goals of the present research was to examine the nature, feasibility, and desirability of on-line explanations of errors and disturbances in an event-driven environment. Earlier studies suggest that explanation does not necessarily occur or precede recovery during dynamic disturbance management. This finding may be explained by the fact that explanation or diagnosis is useful primarily in the context of knowledge-based performance, in which a novel or unexpected outcome requires the on-line development of a solution by the operator. Also, the limited time that is available for error explanation in most dynamic worlds is likely to prevent operators from engaging in this step before taking some action. Our goal was to determine whether, and under what circumstances, operators attempt to explain before they respond, and to what extent engaging in explanation is required for successful disturbance handling. Another objective was to examine the range of recovery strategies that are supported and developed by operators in highly dynamic interactive domains, especially when they require, and are influenced by, the availability of complex and flexible automation technology (see below). Very little is known about operators' basis for choosing among recovery options or about their effectiveness for various error types and task contexts. Furthermore, the potential need to coordinate recovery actions with other agents, such as air traffic controllers in the case of aviation, may play a role in how operators choose among, and persist with, recovery options.

Impact of automation design and complexity on error explanation and recovery. An analysis of how technological tools shape the cognitive activities (diagnosis, hypothesis testing, planning, monitoring) of the error and disturbance management process seems to be missing from most earlier efforts. The observability of the human-computer interface (the degree to which events, changes, and anomalies in the monitored process are effectively communicated to the operator) will affect the timely detection of a mismatch between expected and actual system state and behavior (Rasmussen, 1985; Woods et al., 1994). Interface design and system functionality will also influence an operator's ability to explain a disturbance and to identify and implement timely recovery measures. For example, predictive navigation displays that visualize the effects of pilot interventions can aid a pilot in mentally simulating and assessing the effectiveness of possible recovery actions to avoid an altitude violation. Another important question relates to the proposed generic recovery strategy of reverting to a lower level of automation when a problem is encountered. This strategy has been advocated by the research community and by the training departments of major airlines; however, it is not clear whether, and under what conditions, this strategy is actually being used and whether it is necessarily the most effective method.

Modeling. Further extensions of disturbance management models will require empirical data related to operators' handling of error-induced disturbances in event-driven domains. As mentioned earlier, there is little data on the nature of the explanation process, how it affects one's ability to recover from errors, and how operators choose, evaluate, and possibly revise a recovery strategy. It also appears to be useful to try to integrate the existing perspectives and knowledge bases on error management and the more general concept of disturbance management (Woods, 1994).

46 BRIEF OVERVIEW OF AUTOFLIGHT AND THE FLIGHT MANAGEMENT SYSTEM Prior to reporting the past research activities and the current work, both in the aviation domain, the reader requires a brief introduction to autoflight system used by pilots on highly automated aircraft, better known as the Flight Management System (FMS). The FMS supports a variety of functions on modern flight decks, primarily automatic flight path control. Two interfaces are used by the pilot to input information to the FMS: a) the mode control panel (MCP) and b) the control display units (CDUs; one for each pilot). The MCP is a tactical interface used for specifying short-term airspeed, vertical speed, altitude, and heading targets and to activate autoflight modes related to thrust (such as SPD), vertical navigation (such as VNAV and FLCH), and lateral navigation (such as LNAV and HDG SEL). The CDU is a more strategic interface that consists of small display screen, with an alphanumeric keyboard which that allows pilots to view and input flight information on a variety of different pages, including an entire flight plan (altitude and airspeed constraints for each waypoint in a route). After the FMS has been instructed via these two interfaces, the pilot can activate the autopilot, which will then track a programmed flight path according to the CDU or MCP targets, depending on the active modes. High-level modes (e.g., VNAV and LNAV) are coupled to the CDU route and thus fairly autonomous in the sense that they can track a series of waypoints along a 35

flight path without pilot intervention. Low-level modes (e.g., FLCH, V/S, HDG SEL) simply capture and hold individually set MCP targets. Information on the current and future status, targets, and behavior of the automation is presented on the CDU data display, the MCP target windows, the Primary Flight Display (PFD; which also shows basic flight parameters, such as airspeed and altitude), and the map display, which depicts a plan view of the aircraft and its future flight path. At the top of the PFD, Flight Mode Annunciations (FMAs) indicate the active automation modes and any armed modes (those that will be triggered by future conditions, such as the passing of a waypoint or the capture of an altitude level) (Figure 5).

Figure 5. Locations of pilot interfaces (PFD, MCP, and CDUs) on the flight deck
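To make the division of labor between the two interfaces concrete, the sketch below models how the active lateral mode determines what the autopilot tracks. This is an illustrative simplification, not Boeing's actual mode logic; the data structures and function names are invented for exposition.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class MCPTargets:
    """Tactical targets dialed on the Mode Control Panel."""
    altitude_ft: int
    heading_deg: int
    speed_kts: int

@dataclass
class Waypoint:
    """One leg of the strategic route programmed through the CDU."""
    name: str
    altitude_ft: int
    speed_kts: int

def lateral_target(active_mode: str, mcp: MCPTargets,
                   route: List[Waypoint]) -> Union[Waypoint, int]:
    # High-level LNAV is coupled to the CDU route: it tracks a series of
    # waypoints autonomously. Low-level HDG SEL simply holds the single
    # heading set on the MCP.
    if active_mode == "LNAV":
        return route[0]           # next active waypoint in the route
    if active_mode == "HDG SEL":
        return mcp.heading_deg    # one individually set MCP target
    raise ValueError(f"unmodeled mode: {active_mode}")
```

The same route-coupled versus single-target split applies to the vertical modes, with VNAV on the high-level side and FLCH or V/S on the low-level side.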

METHODS

RESEARCH ACTIVITIES TO DATE

The stated research goals have been pursued using a converging operations approach in order to benefit from the advantages of various methodologies and to compensate for their respective shortcomings. Several research activities have been completed as part of this research program. These activities and their findings are discussed briefly in the following section, since they helped inform the final phase of the research program: a simulation study of disturbance management on modern flight decks, which is the focus of this document and which will be described in detail.

Instructor survey

A survey of instructor pilots at a major U.S. carrier was conducted in order to learn about error types and error management strategies on glass cockpit aircraft (Nikolic and Sarter, 2003). Instructors were targeted because they have taught and observed a wide range of pilot behavior and performance and because they are less likely to try to defend, rather than describe, observed actions. Nevertheless, it is important to keep in mind that the survey respondents were a self-selected group and thus may have their own reporting biases and agendas. The survey results confirmed earlier findings, including the

observation that lapses (errors of omission) tend to be the most difficult to detect. They often involve incomplete action sequences, usually due to the omission of the final step in a sequence of automation commands. For example, forgetting to engage an automation mode after selecting a new altitude or speed target is one type of command omission that was reported repeatedly (see post-completion errors in Byrne and Bovair, 1997; Gray, 2000). Second, pilots appear to have difficulties detecting unwanted changes in automation status and behavior that occur due to system coupling (see Sarter and Woods, 1997, 2001). For example, reprogramming the arrival runway in the flight management interface causes the possibly unwanted loss of other pre-programmed information related to speed and altitude constraints on the descent. Third, error detection is very difficult in cases that involve partial confirmation, i.e., cases in which a command omission is not detected because it is masked by seemingly appropriate aircraft behavior. For example, forgetting to engage a computer-managed navigation mode can be masked if the aircraft is flying along its programmed route but is doing so in a heading hold mode.

In terms of error explanation, a majority of survey responses indicated that flight crews tend to forgo this step. Some responses explicitly suggested that taking any action (not necessarily an informed one) is preferred to explanation. Nevertheless, when asked in general terms whether explanation makes recovery easier, the instructors' responses were split evenly. Instructors were also asked what factors seem to affect pilots' decision to engage in, or skip, the error explanation stage. Factors such as confidence and personality, system knowledge, and pilots' experience were considered relevant. Contextual factors such as

time pressure and workload demands related to the phase of flight were also cited by instructors as determining whether pilots engage in error explanation. When explanation is skipped, pilots engage in recovery actions immediately following error detection. It appears from the survey data that the highly dynamic and interactive nature of aviation makes forward recovery the only feasible option in most cases. Moving forward is achieved through a number of different activities, such as the repetition of a command, pushing buttons until something works, trying a different method for accomplishing the task, or simply muddling through. Even though we asked for this information, it is not clear from the instructors' responses when or why these different strategies are used. However, responses suggest that recovery relies on workarounds rather than on system knowledge related to the error itself, which could be related to the absence of, and/or an inability to perform, error explanation and diagnosis.

Incident report analysis

In order to supplement the survey data with insights from actual flight operations, a detailed review was conducted of 38 Aviation Safety Reporting System (ASRS) reports (selected from an initial review of 935) that involved the management of automation-related errors on modern flight decks. These reports were analyzed with respect to error type as well as error detection, explanation, and recovery. Lapses and mistakes were the most frequently reported error types in these reports. This is likely due to the tendency for slips (which tend to be the most frequent error type in terms of absolute numbers) to be detected more readily and corrected before leading to a problem worth reporting. In the

majority of cases, error explanation was not mentioned. It is not clear whether pilots simply did not include any information on this stage, or whether (and when) they actually moved directly from the detection of an error to its recovery. As in the survey data, forward recovery was the predominant recovery strategy in the database. Although such near-miss incident reports have been suggested as the optimal source of data for studying recovery (Schaaf and Kanse, 2000), the loose coupling between process and outcome suggests that this may not be the case, since situational factors are often the determinants of negative outcomes, given the same action. Furthermore, uncertainties about the completeness and accuracy of these reports limit their use as exclusive and reliable sources of operator performance data.

Modeling work

The findings from the instructor survey and the incident report analysis highlighted the need for a revised model of error recovery. Current models of error recovery are based on analytical work and a small number of software usability studies in domains that involve individual users performing self-paced tasks on a computer. These models do not necessarily capture the range of possible error recovery strategies in highly dynamic domains, such as aviation, where numerous agents interact and collaborate on an overall goal. Therefore, a revised model (Figure 6) has been proposed that uses a puzzle analogy to illustrate why forward recovery to a revised goal state is often the only option in highly interactive environments. In these domains, an initially intended state (such as a specific airplane position) is part of an overall intended configuration (such as a landing

sequence). In the case of an erroneous action, it is often neither possible nor desirable to recover backwards to the initial state (before the erroneous action) since the overall configuration has changed. Instead, a new, alternate goal state must be identified and achieved (e.g., a new slot in the landing sequence must be assigned to the airplane). These modeling efforts have raised questions about whether and how recovery is influenced by explanation, contextual factors, and automation technology, and whether recovery should be modeled as an iterative (cyclical), rather than linear, process.

Figure 6. Proposed context-sensitive recovery model, illustrating that a revised system state is needed to match the changed configuration

Simulator Study

As the final step in a research program that included jump-seat observations, a flight instructor survey, and an incident database analysis, a high-fidelity simulator study was conducted with type-rated airline pilots in order to examine error and disturbance management in a semi-controlled, full-mission flight simulation context.

Participants

Pilot volunteers were recruited from two major U.S. carriers and one airplane manufacturer. Twelve type-rated Boeing pilots (11 current, 1 recently retired) participated in the study (see Table 2). Volunteers were paid $100 for their participation.

Table 2. Participant flight hours, listing for each of the twelve pilots the airline (A, B, or C) and the hours flown as captain (CAPT), as first officer (FO), and in total, with means and standard deviations

Simulator

The simulation was conducted on a fixed-base Boeing flight simulator. The simulated aircraft is a highly automated, four-engine, long-haul passenger type. The simulator was equipped with fully functional displays and control interfaces. An Evans & Sutherland ESIG 3350 image generation system rendered a panoramic out-of-window

visual scene which covered 45° horizontally and 34° vertically for each pilot (see Figure 14).

Procedure

After briefing the flight with the experimenter and reviewing all flight-related paperwork, the participating pilot joined the confederate pilot in the simulator. The confederate served in the role of pilot-not-flying. He occupied the right (co-pilot) seat and helped ensure that scenario events occurred as designed. The confederate pilot was instructed not to be overly proactive in helping participating pilots detect their errors. However, he was instructed to intervene (by directing the participant's attention) if a detection delay jeopardized the experimenter's ability to observe a recovery. The confederate was also asked to elicit pilots' reasoning about problems by asking relevant questions that exposed the pilot's intentions and reasoning. This approach ensured that the participating pilot would have to engage in diagnosis and recovery in all cases where a disturbance occurred. Interactive air traffic control was provided by the experimenter/observer to help ensure the proper evolution of the scenario by issuing planned and improvised clearances (Appendix A). After reviewing the planned route and the current state of the aircraft, the scenario began in flight with the aircraft level at 9000 feet, during the initial climb-out phase. By eliminating pre-flight preparations that were not relevant to the events in the scenario, the overall length of the flight was reduced to one hour. The scenario ended once the aircraft landed at Los Angeles and came to a complete stop on the runway. The pilot then

remained in the simulator cab and was debriefed by the experimenter.

Scenario

All participants flew the same one-hour daytime scenario from San Francisco to Los Angeles in the role of pilot-in-command, with a confederate pilot who knew the purpose of the study and who, in the role of co-pilot, helped the participant complete the scenario without creating any difficulties for him. Weather throughout the scenario was clear with minimal winds. Based on data gathered from our earlier survey, observations, and consultations with domain experts, several scenario events were designed that created a high probability of eliciting automation-related errors and disturbances. Of specific interest was the elicitation of omissions (lapses) and knowledge-based mistakes resulting from novel situations and/or incomplete understanding of automated system functions. Since errors and disturbances were not introduced through experimenter-induced system failures or unrealistic clearances, they were not necessarily observed for each pilot on each event. Figure 7 shows the scenario route overview and profile, with approximate locations of event triggers (see also Appendix A for more detail). Details of each event are described in the following section.

Figure 7. Overview of the scenario route and profile, showing the waypoints on the flight plan (PORTE, PESCA, WAGES, AVE, DERBB, REYES, FIM, SYMON, SADDE, BAYST, SMO, LIMMA, JAVSI, and HUNDA, plus the top-of-climb and top-of-descent points) and the approximate trigger locations of Events 1 through 5; all events occurred between PORTE and SMO.

Scenario Events

Event 1: LNAV Overlap. The scenario began at an altitude of 9000 feet over the PORTE waypoint. The aircraft was positioned directly over the route that had been programmed into the FMC and that was represented by a solid magenta line on the navigation display. In order for the airplane to follow this route, the autopilot would need to be engaged in

the LNAV (lateral navigation) mode. However, for this scenario, the autopilot had been engaged in the HDG SEL (heading select) mode, which was set to follow a 137 degree heading. Thus, the navigation display gave the appearance that the aircraft was following the programmed route when, in fact, it was going to follow the selected heading and miss the subsequent left turn to the next waypoint on the route. Pilots had to recognize that they were in the incorrect mode (as indicated on the PFD) and needed to engage the LNAV mode prior to reaching the next waypoint (PESCA). This situation was created to elicit a mode error due to the presence of feedback that partially confirmed the expected autopilot behavior.

Event 2: PESCA Climb. Within moments of starting the scenario, the pilot was given a clearance by ATC to cross the next waypoint (PESCA) at an altitude of 16,000 feet for traffic separation. This clearance was 3000 feet higher than pilots had anticipated according to the flight plan. The new altitude restriction created a very difficult climb profile for the aircraft, especially since a reduced engine thrust setting was active (a CLB2 derate) which limited the maximum climb performance and made it difficult for the aircraft to achieve the required altitude in the short time and distance available. Pilots had been informed about the reduced thrust setting during the briefing, and a corresponding indication appeared on a display on the center pedestal. Several strategies were available to the pilot for coping with this disturbance; in all cases, the rapid removal of the thrust derate was critical. This event was introduced to examine whether pilots would quickly understand the performance challenge that this clearance posed, how they would comply with the restriction, and whether they would remember or notice that the thrust derate was active.
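A rough back-of-the-envelope calculation shows why the amended restriction sat near the aircraft's performance limits. The numbers below are assumptions chosen for illustration (the distance echoes Pilot 3's estimate of about 9 miles to PESCA, reported in the Results; the ground speed is invented), not values taken from the simulator data.

```python
# Required climb rate to cross PESCA at 16,000 ft, starting level at 9000 ft.
distance_nm = 9            # assumed remaining distance to PESCA
ground_speed_kts = 280     # assumed ground speed during the climb
altitude_to_gain_ft = 16000 - 9000

time_available_min = distance_nm / (ground_speed_kts / 60)   # ~1.9 min
required_fpm = altitude_to_gain_ft / time_available_min      # ~3600 ft/min

print(f"{time_available_min:.1f} min available -> {required_fpm:.0f} fpm required")
```

With a best achievable rate on the order of 3000 ft/min (the figure Pilot 4 cited in his own mental calculation), even an immediate, well-chosen response leaves little or no margin, which is consistent with the finding, reported below, that no pilot met the restriction.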

Event 3: LNAV Capture. After crossing PESCA, ATC instructed the aircraft to continue on a 140 degree heading instead of turning left to continue on the flight plan. This clearance resulted in a course that is parallel to, but about 30 nautical miles offset from, the programmed route. As a result, the aircraft would not cross the next two waypoints that were programmed into the Flight Management Computer (FMC). A side effect of this particular course deviation is that the circumnavigated waypoints are not dropped from the CDU route but remain active. Thus, if the route is not reprogrammed by the pilot, the autopilot will attempt to return to these waypoints, resulting in unwanted aircraft behavior when the pilot attempts to rejoin the course by activating the LNAV mode (Figure 8; see also the sketch following the figure).

Figure 8. LNAV Capture event. ATC instructs the pilot to deviate from course after PESCA, causing two waypoints in the FMS route to be bypassed; when ATC later instructs the pilot to turn left to a 070 heading and resume the course to AVE, engaging LNAV without reprogramming the course leads to an incorrect intercept path.
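The side effect arises from how the FMS sequences waypoints. The sketch below is an assumed, simplified rendering of that behavior for illustration only; the capture criterion and the helper function are invented, not the actual FMS logic.

```python
import math

def distance_nm(pos_a, pos_b):
    # Hypothetical helper: straight-line distance between two (x, y)
    # positions already expressed in nautical miles.
    return math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])

def sequence_waypoints(route, aircraft_pos, capture_radius_nm=2.5):
    """Drop the active waypoint only when the aircraft passes close enough
    to it. On a large parallel offset the bypassed waypoints are never
    captured, so they remain active (stale) targets in the route."""
    while route and distance_nm(aircraft_pos, route[0][1]) < capture_radius_nm:
        route.pop(0)   # waypoint overflown: advance to the next leg
    return route

# A ~30 nm offset never triggers the capture criterion, so the floating
# waypoint and WAGES stay at the head of the route:
route = [("(INTC 090)", (0, 0)), ("WAGES", (10, 0)), ("AVE", (40, 0))]
print(sequence_waypoints(route, aircraft_pos=(10, 30)))  # nothing dropped
```

Engaging LNAV in this state steers the aircraft toward the stale head of the route rather than toward the intercept leg the clearance intended.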

Event 4: VNAV ALT mode. On an automated descent, the Flight Management System creates an idle-power descent profile, which defines a point on the aircraft's route where the descent must begin (the so-called top-of-descent, or TOD, point) in order to meet programmed altitude targets. In order to begin the descent on its own, the automation must be in the VNAV PTH mode. However, in our scenario, the automation was likely to enter the VNAV ALT mode due to cruise altitude changes given by air traffic control. If the pilot does not actively change the mode back to VNAV PTH (typically, by changing the cruise altitude in the CDU interface of the Flight Management System and then pushing the altitude knob) (Figure 9), the aircraft will not descend at the TOD as expected and may miss an altitude target. This event could elicit a mode error due to either incomplete system knowledge or a monitoring breakdown, and could have resulted in an altitude violation if it was not detected and corrected in a timely manner.
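The mode transition that creates this trap, summarized in Figure 9 below, can be stated compactly. The following sketch mirrors only the behavior described in this section; it is not Boeing's actual mode logic.

```python
def vnav_mode_at_level_off(mcp_altitude_ft: int, fms_cruise_altitude_ft: int) -> str:
    """Mode that becomes active when a climbing aircraft (in VNAV SPD)
    reaches the altitude set on the MCP."""
    if mcp_altitude_ft == fms_cruise_altitude_ft:
        return "VNAV PTH"   # will begin the descent on its own at the TOD
    # MCP altitude below the FMS cruise altitude:
    return "VNAV ALT"       # holds altitude; will NOT descend at the TOD

# Recovery described above: change the FMS cruise altitude (via the CDU)
# to match the MCP altitude, then push the MCP altitude knob.
assert vnav_mode_at_level_off(35000, 35000) == "VNAV PTH"
assert vnav_mode_at_level_off(33000, 35000) == "VNAV ALT"
```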

Figure 9. VNAV mode transition diagram. During the climb, the active mode is VNAV SPD. Upon reaching the MCP altitude: if the MCP altitude equals the FMS cruise altitude, the mode transitions to VNAV PTH and the aircraft descends at the TOD; if the MCP altitude is below the FMS cruise altitude, the mode transitions to VNAV ALT and the aircraft does not descend until the pilot changes the FMS cruise altitude to match the MCP altitude and pushes the MCP altitude knob.

Event 5: Descent restrictions. As the aircraft descended toward the airport, ATC requested that the aircraft meet several crossing restrictions at various waypoints, restrictions which were near the limits of the aircraft's performance capabilities (Figure 10). Pilots had a choice of techniques for descending the aircraft (including the use of different levels of automation), and these were examined to determine which factors led them to have difficulty in meeting the restrictions and which strategies they used to recover the descent path.
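What makes a crossing restriction "tight" can be approximated with a common rule of thumb (not drawn from this study): a typical idle descent loses roughly 1000 ft per 3 nm of track. The values below are illustrative; the scenario's actual restriction altitudes are shown in Figure 10.

```python
# Rule-of-thumb descent geometry: distance needed to shed a given altitude.
def descent_distance_required_nm(altitude_to_lose_ft, nm_per_1000ft=3.0):
    return altitude_to_lose_ft / 1000 * nm_per_1000ft

# Example: descending 20,000 ft needs on the order of 60 nm; a restriction
# becomes hard to meet when the distance available approaches this figure.
print(descent_distance_required_nm(20000))  # -> 60.0
```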

Figure 10. Vertical profile view of the descent restrictions, from the top-of-descent point through crossing restrictions at FIM, SYMON, SADDE, and BAYST, ending with an altitude window at SMO.

In summary, these scenario events were chosen because of their likely ability to place heavy knowledge and attentional demands on pilots, resulting in the potential for breakdowns in human-machine communication and coordination. These breakdowns could then require pilots to detect, explain, and recover from the associated disturbances to the flight.

Data collection and analysis

A combination of methods was used to gather performance and process data from participants. During the scenario, the simulator automatically recorded flight parameters such as automation status and control inputs made to the flight deck interfaces. Multi-angle video and audio recordings were made to assist in recreating verbal and behavioral protocols. This information was supplemented by an observer, who sat directly behind the pilots in the simulator cab and tracked pilot responses to events using a form developed for this purpose (see Appendix B). When necessary, the confederate pilot

engaged pilots in discussions during the various scenario events to externalize, and help track, their thought processes, goals and intentions, and attentional focus. Upon completion of the scenario, the participating pilot was debriefed by the experimenter in order to review and clarify any ambiguities about his scenario performance. In addition, portions of the debriefing were used to probe participants' knowledge of the automated flight system in relation to their performance. This information was used to determine whether pilots possessed accurate mental models of system behavior that could be related to observed explanation or recovery strategies. These sources of data were combined to form a coherent process trace (Woods, 1993) of participant behavior that could be compared across participants as well as to canonical or standard recovery paths for each event.

RESULTS

All twelve pilots completed the scenario successfully in the sense that they all made a safe landing. However, every pilot struggled at some point with handling events during the simulated flight (Table 3), and every scenario unfolded in a unique way because pilots used a variety of strategies for managing events and recovering from disturbances. Performance on each of the scenario events is detailed in the following sections. Note that information on Event 1 will be presented briefly at the end of the Results section since it did not result in any errors or disturbances. When possible, a canonical solution path was defined by a subject matter expert (SME) for each event. This path represents the most efficient, but not necessarily the only correct or successful, sequence of pilot actions for the event. It provides a single frame of reference from which to compare performance across pilots. A trace of each individual pilot's actual event handling was constructed based on pilot actions, unsolicited comments, and dialogue with the confederate pilot. Finally, the individual traces were combined into an overall composite trace for each event in an attempt to identify patterns, performance trends and strategies, and their possible relation to the event outcome.

Event      Experienced     Overall        Engaged in      Followed          Recovery method                 Knowledge
           disturbance?    success rate   explanation?    canonical path?                                   gaps?
CLIMB      12              0%             -               -                 9 high-level auto,              N/A
                                                                            2 low-level auto, 1 manual
LNAV       7               100%           -               -                 repeat                          7
VNAV ALT   10              70%            -               -                 2 early descents, 1 reset,      8
                                                                            1 repeat, 2 trial and error,
                                                                            late canonical
DESCENTS   12              17%            0               N/A               5 used external agent           N/A

Table 3. Summary of scenario performance

EVENT 2: PESCA CLIMB - RECOGNIZING AND ADAPTING TO PERFORMANCE LIMITS

This event required pilots to quickly recognize that an air traffic controller's modification of their original climb clearance was going to be difficult to achieve. During the climb to their cruise altitude of 35,000 feet, pilots were instructed to cross an intermediate waypoint (a fix called PESCA) at 16,000 feet instead of 13,000 feet. This expedited climb to a higher-than-expected altitude was difficult to attain given that the aircraft was initialized with a lower thrust setting, called a derate (an engine performance limit which is often used in real-life operations to save fuel). This event represented a time-critical situation: any delay in identifying and adopting the correct strategy for meeting the new altitude constraint made it increasingly difficult to achieve the goal as the distance between the aircraft and the waypoint decreased rapidly.

Figure 11. Composite of abstracted solution paths for the PESCA Climb event; each color corresponds to one pilot. The canonical path runs: dial the target altitude, engage the FLCH or V/S vertical mode, reduce speed, remove the derate, and cross PESCA at 16,000 feet. Observed departures from this path included remaining in VNAV SPD, increasing speed, disengaging the autopilot, and altitude deviations of 2000 ft and more.

As shown in Figure 11, the canonical path for this event involved three actions after dialing the target altitude: 1) engaging a vertical navigation mode at a lower level of automation, such as FLCH or V/S, which offers more direct control of the aircraft's climb parameters; 2) reducing the commanded airspeed, which increases the climb rate; and 3) removing the thrust derate that had been set at the beginning of the scenario, in order to improve the aircraft's climb performance. The sequence in which these steps were taken was not critical.

None of the pilots in the study followed the canonical path, and none succeeded in meeting the climb restriction (Figure 11). Some pilots recognized, and commented early on, that the restriction would be difficult to meet. All of them attempted a variety of actions for climbing the aircraft to the required altitude. One important finding was that the majority of pilots relied heavily on a high-level automation mode (VNAV SPD) to recover the required climb profile rather than reverting to lower-level modes. Nine pilots completed the climb using the VNAV SPD mode, which commands a fuel-efficient vertical profile, resulting in a lower-than-possible climb angle. The other three pilots used strategies that allowed faster and more direct aircraft control. Pilot 3 (see Figure 12), upon hearing the clearance, quickly calculated that the new climb could not be accomplished in the remaining distance and informed ATC that he was not able to comply. As in all other cases, ATC responded that he should make his best effort to reach the higher altitude. In response, Pilot 3 simply disengaged the autopilot and manually raised the nose of the aircraft in order to quickly increase the climb rate. In the debriefing, Pilot 3 explicitly stated that he disengaged the autopilot for expediency and quicker aircraft response, in contrast to using the VNAV mode, which was sluggish and could potentially produce unexpected speed changes or level-offs.

Figure 12. Solution path for Pilot 3 in the PESCA Climb event. Annotations include his remarks ("...uh, not sure that we can make that. How far is PESCA? 9 miles? Can't do it. Tell them we can't comply."), his disengaging the autopilot and commanding the nose up to 10 kts above clean maneuver speed for maximum rate (which he said was better than max angle), and his later comment that he chose to fly manually for expediency and quicker aircraft response.

Pilot 4 (see Figure 13), like all pilots, began handling the event using the VNAV SPD mode. He quickly began to doubt his ability to meet the clearance and turned to mental calculation to determine whether the aircraft's current performance would allow it to reach the goal. The calculation revealed that the current climb rate (i.e., vertical speed) was insufficient, which caused Pilot 4 to increase this parameter directly by switching to the Vertical Speed (V/S) mode, a lower level of automation that mapped directly onto his intentions. Perhaps realizing that this strategy came too late and would still not be successful, he informed ATC. Note that both of these pilots still crossed the PESCA waypoint below the required 16,000 feet; however, due to their fast response and their use of lower levels of automation, they came closer to meeting the altitude constraint than the 8 other pilots.

Figure 13. Solution path for Pilot 4 in the PESCA Climb event. Annotations include his running commentary ("And we hit VNAV. And we're climbing... no we're not. LNAV is engaged and VNAV SPD, we got that. We've got CLM2... We want to cross PESCA... we are not gonna... I think we're gonna make it, but let's see: we've got 7 miles, we've got 5000 ft, we're climbing 2000 fpm, 3000 fpm is best... that's a minute and a half, 22 miles... Let's go Vertical Speed, I'm gonna just crank it up here so we can make our crossing restriction. And I'm gonna trade off speed for altitude, or altitude for speed I should say. You might want to ask them for relief.") along with his actions: pushing the altitude knob (deleting the PESCA restriction of 13000A), engaging V/S, and crossing below 16000A.

Pilot 12 was slower to recover from the inadequate climb performance of the VNAV SPD mode. He detected a problem with his initial strategy when he observed the system message UNABLE NEXT ALT displayed on the CDU interface.² This message is generated by the FMS when it detects that the programmed altitude target for the approaching waypoint cannot be met, and it appeared for all pilots in this event. However, the location of the CDU (adjacent to the pilot's knee) makes it challenging to notice this text message in a timely manner (if at all), since during this event the pilot is typically monitoring the PFD, the ND, and the MCP, which are all in the forward field of view. After seeing the message, Pilot 12 engaged the FLCH (Flight Level Change) mode and reduced his airspeed, as prescribed by the canonical path.

² In addition to appearing on the scratchpad at the bottom of the CDU screen, a MSG annunciator light illuminates on the CDU and an FMC MESSAGE alert is displayed on the central EICAS screen. All of these cues would typically be in the pilot's visual periphery during this event.

However, because the remaining step of the canonical path (removing the thrust derate) was omitted, and because of the delay in initiating this recovery, the aircraft missed the altitude target by 3000 feet.

Interestingly, only 4 of the 12 pilots (2, 9, 10, and 11) noticed and removed the thrust derate that was armed at the onset of the scenario without the assistance of the confederate pilot.³ Two of these pilots (2 and 11) executed this step early in the event and came the closest to the required altitude (within 1000 feet). By comparison, the rest of the pilots in the scenario missed the target altitude by a considerably wider margin. Pilot 10 removed the derate but also increased his speed for a portion of the event (as described below), which reduced his climb rate. The reason why most pilots neglected to remove the thrust derate was probably a combination of inert knowledge (pilots had been informed prior to the scenario that the derate was set) and the poor feedback that is provided to pilots about the derate. This feedback consists of the letters CLB2 displayed in small green text embedded within the engine indication display (known as the EICAS display), which is located away from the main displays on which pilots were more likely to focus during this event (see Figure 14).

³ Although Pilot 1 also removed the derate, it was at the suggestion of the confederate pilot. Four other pilots removed the derate proactively, without the confederate pilot's cueing.

Figure 14. EICAS display (shown left) with CLB (Climb) thrust armed (white arrow). This screen is located in the center of the instrument panel (shown right).

Finally, it is interesting to note that two pilots who did not follow the canonical path for this event also took an action that exacerbated the problem. Rather than reducing their speed in order to improve their climb rate, these two pilots (7 and 10) increased their airspeed at some point during the climb, and consequently each missed the target altitude by about 3000 feet. Pilot 7 seemed to have taken this action due to a misconception about FMS functionality. His intention was apparently to lower his speed, as evidenced by his deliberately asking for and dialing in the maximum angle speed (a value which is calculated by the FMS and must be found by navigating through pages in the CDU interface). However, soon after finding and setting the reduced speed, he pushed the speed button, which increased the speed back up to the original value. Upon seeing the speed increase, he said, "Why is it doing that?" and subsequently intervened to lower the speed again (Figure 15 below). He apparently did not understand that pushing the speed button switches the aircraft's speed target to a higher default value (the ECON speed), which would be typical in

this phase of flight. In other words, he was unable to determine how to instruct the automation according to his intentions (a failure to bridge the gulf of execution).

Figure 15. Solution path for Pilot 7 in the PESCA Climb event. Annotations include his remarks ("It's not going to make PESCA at or above 16000, I know that... Why is it doing that? What's our weight? It's just not climbing.") and his actions: asking for the VNAV page and the max angle speed, deleting and reprogramming the PESCA restriction of 16000A, pushing the altitude knob, pushing VNAV, and closing the speed window.

Pilot 10 also searched for the maximum angle speed value in the CDU, but instead of reducing his speed accordingly, he dialed the speed up to the higher FMS default target value, which was displayed on the same screen as the maximum angle speed of 280 kts (see Figure 16). It is possible that this pilot misread the value from the FMS screen (a slip) and entered the higher ECON speed value (320 kts), which was counterproductive to his goal of increasing his climb rate (a mistake).

Figure 16. A schematic of the CDU 'ECON CLB' page, showing the FMS-commanded ECON speed (ECON SPD 320/.800) and the MAX ANGLE speed value, along with the cruise altitude (CRZ ALT FL350) and the SPD TRANS, TRANS ALT, and SPD RESTR fields.

EVENT 3: LNAV CAPTURE - UPDATING STALE INFORMATION

This event examined how pilots recovered their original course after an air traffic control clearance caused them to bypass two of the waypoints on the original route. One of these waypoints was a so-called floating waypoint; i.e., it did not correspond to a geographic position like most other waypoints, but rather only specified a turn toward 090 degrees. The second waypoint corresponded to the geographical fix known as WAGES (see Figures 8 and 19). The clearance was designed to produce a side effect whereby the FMS continued to consider the bypassed waypoints as active (i.e., as valid targets), since the airplane never came close enough for them to be removed by the automation's logic. As a result, pilots who re-activated LNAV to resume the course without first modifying the

route in the CDU caused the airplane to turn off course to a 090 heading (toward the still-active floating waypoint) instead of the 070 heading instructed by ATC. All pilots eventually managed to recover their course, although minor deviations occurred for two pilots (see Figure 17). After receiving the ATC clearance to intercept their normal course via a 070 heading, all pilots made the initial turn using the HDG SEL mode (a lateral mode at a low level of automation; Figure 17, A). The recovery processes from that point on fall into two categories. One group of five pilots reprogrammed the route prior to engaging the LNAV mode (represented on the lower half of Figure 17). A second group of seven pilots activated the LNAV mode without updating the original route in the FMS. Pilots in the first group realized that the ATC deviation had produced a mismatch between the currently active FMS route and the updated route of the aircraft, based on the feedback displayed on both the map display (Figure 18) and the LEGS page of the CDU (Figure 19). These interfaces showed that the active waypoint was either the floating waypoint or WAGES, when it needed to be AVE in order to comply with the current clearance. Upon noticing this mismatch, these pilots manually removed the bypassed waypoints from the CDU.

Figure 17. Composite of abstracted solution paths for the LNAV Capture event; each line represents one pilot and is color-coded across pilots. All paths begin with dialing the 070 heading (A). The canonical path (B) dials the heading to the course, programs the new intercept course, and then pushes LNAV; the other observed paths (C through G) pushed LNAV before reprogramming the intercept course (in some cases after re-engaging HDG SEL), producing minor deviations before the pilots ended up on course.

Figure 18. Example of a Navigation (Map) Display.

Figure 19. CDU 'LEGS' page as it would appear during the LNAV Capture event (ACT RTE 1 LEGS), showing the active 090° (INTC) floating waypoint at the head of the route, followed by WAGES (11 NM), AVE (117 NM), DERBB (29 NM), and REYES (44 NM) with their associated speed and altitude (flight level) constraints.


IMPROVING PROBLEM SOLVING EFFICIENCY: THE WHAT AND HOW OF CAUTION IMPROVING PROBLEM SOLVING EFFICIENCY: THE WHAT AND HOW OF CAUTION By MARTIN E. KNOWLES A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

More information

Behaviorism: An essential survival tool for practitioners in autism

Behaviorism: An essential survival tool for practitioners in autism Behaviorism: An essential survival tool for practitioners in autism What we re going to do today 1. Review the role of radical behaviorism (RB) James M. Johnston, Ph.D., BCBA-D National Autism Conference

More information

Incorporating Experimental Research Designs in Business Communication Research

Incorporating Experimental Research Designs in Business Communication Research Incorporating Experimental Research Designs in Business Communication Research Chris Lam, Matt Bauer Illinois Institute of Technology The authors would like to acknowledge Dr. Frank Parker for his help

More information

Student Performance Q&A:

Student Performance Q&A: Student Performance Q&A: 2009 AP Statistics Free-Response Questions The following comments on the 2009 free-response questions for AP Statistics were written by the Chief Reader, Christine Franklin of

More information

Why do Psychologists Perform Research?

Why do Psychologists Perform Research? PSY 102 1 PSY 102 Understanding and Thinking Critically About Psychological Research Thinking critically about research means knowing the right questions to ask to assess the validity or accuracy of a

More information

Expert Focus Group on Driver Distraction:

Expert Focus Group on Driver Distraction: Expert Focus Group on Driver Distraction: Definition and Research Needs US EU Bilateral ITS Technical Task Force April 28, 2010 Berlin, Germany Disclaimers The views expressed in this publication are those

More information

The Objects of Social Sciences: Idea, Action, and Outcome [From Ontology to Epistemology and Methodology]

The Objects of Social Sciences: Idea, Action, and Outcome [From Ontology to Epistemology and Methodology] The Objects of Social Sciences: Idea, Action, and Outcome [From Ontology to Epistemology and Methodology] Shiping Tang Fudan University Shanghai 2013-05-07/Beijing, 2013-09-09 Copyright@ Shiping Tang Outline

More information

Who? What? What do you want to know? What scope of the product will you evaluate?

Who? What? What do you want to know? What scope of the product will you evaluate? Usability Evaluation Why? Organizational perspective: To make a better product Is it usable and useful? Does it improve productivity? Reduce development and support costs Designer & developer perspective:

More information

Congruency Effects with Dynamic Auditory Stimuli: Design Implications

Congruency Effects with Dynamic Auditory Stimuli: Design Implications Congruency Effects with Dynamic Auditory Stimuli: Design Implications Bruce N. Walker and Addie Ehrenstein Psychology Department Rice University 6100 Main Street Houston, TX 77005-1892 USA +1 (713) 527-8101

More information

Augmented Cognition: Allocation of Attention

Augmented Cognition: Allocation of Attention Augmented Cognition: Allocation of Attention Misha Pavel 1, Guoping Wang, Kehai Li Oregon Health and Science University OGI School of Science and Engineering 20000 NW Walker Road Beaverton, OR 97006, USA

More information

Is Leisure Theory Needed For Leisure Studies?

Is Leisure Theory Needed For Leisure Studies? Journal of Leisure Research Copyright 2000 2000, Vol. 32, No. 1, pp. 138-142 National Recreation and Park Association Is Leisure Theory Needed For Leisure Studies? KEYWORDS: Mark S. Searle College of Human

More information

Flightfax R Online newsletter of Army aircraft mishap prevention information

Flightfax R Online newsletter of Army aircraft mishap prevention information Number 32 December 2013 Flightfax R Online newsletter of Army aircraft mishap prevention information In this month s issue of Flightfax, we are focusing on individual and aircrew situational awareness

More information

RECOMMENDATIONS OF FORENSIC SCIENCE COMMITTEE

RECOMMENDATIONS OF FORENSIC SCIENCE COMMITTEE To promote the development of forensic science into a mature field of multidisciplinary research and practice, founded on the systematic collection and analysis of relevant data, Congress should establish

More information

Systems Engineering Guide for Systems of Systems. Summary. December 2010

Systems Engineering Guide for Systems of Systems. Summary. December 2010 DEPARTMENT OF DEFENSE Systems Engineering Guide for Systems of Systems Summary December 2010 Director of Systems Engineering Office of the Director, Defense Research and Engineering Washington, D.C. This

More information

Training Strategies to Mitigate Expectancy-Induced Response Bias in Combat Identification: A Research Agenda

Training Strategies to Mitigate Expectancy-Induced Response Bias in Combat Identification: A Research Agenda Human Factors in Combat ID Workshop Training Strategies to Mitigate Expectancy-Induced Response Bias in Combat Identification: A Research Agenda Frank L. Greitzer Pacific Northwest National Laboratory

More information

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives.

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives. ROBOTICS AND AUTONOMOUS SYSTEMS Simon Parsons Department of Computer Science University of Liverpool LECTURE 16 comp329-2013-parsons-lect16 2/44 Today We will start on the second part of the course Autonomous

More information

Sparse Coding in Sparse Winner Networks

Sparse Coding in Sparse Winner Networks Sparse Coding in Sparse Winner Networks Janusz A. Starzyk 1, Yinyin Liu 1, David Vogel 2 1 School of Electrical Engineering & Computer Science Ohio University, Athens, OH 45701 {starzyk, yliu}@bobcat.ent.ohiou.edu

More information

COURSE: NURSING RESEARCH CHAPTER I: INTRODUCTION

COURSE: NURSING RESEARCH CHAPTER I: INTRODUCTION COURSE: NURSING RESEARCH CHAPTER I: INTRODUCTION 1. TERMINOLOGY 1.1 Research Research is a systematic enquiry about a particular situation for a certain truth. That is: i. It is a search for knowledge

More information

CHAPTER 2: PERCEPTION, SELF, AND COMMUNICATION

CHAPTER 2: PERCEPTION, SELF, AND COMMUNICATION Communication Age Connecting and Engaging 2nd Edition Edwards Solutions Manual Full Download: https://testbanklive.com/download/communication-age-connecting-and-engaging-2nd-edition-edwards-solu THE COMMUNICATION

More information

Shiftwork, sleep, fatigue and time of day: studies of a change from 8-h to 12-h shifts and single vehicle accidents

Shiftwork, sleep, fatigue and time of day: studies of a change from 8-h to 12-h shifts and single vehicle accidents University of Wollongong Thesis Collections University of Wollongong Thesis Collection University of Wollongong Year 1999 Shiftwork, sleep, fatigue and time of day: studies of a change from 8-h to 12-h

More information

EMOTIONAL INTELLIGENCE QUESTIONNAIRE

EMOTIONAL INTELLIGENCE QUESTIONNAIRE EMOTIONAL INTELLIGENCE QUESTIONNAIRE Personal Report JOHN SMITH 2017 MySkillsProfile. All rights reserved. Introduction The EIQ16 measures aspects of your emotional intelligence by asking you questions

More information

Coping with Threats of Terrorism A Protocol for Group Intervention by Richard J. Ottenstein, Ph.D., CEAP, CTS

Coping with Threats of Terrorism A Protocol for Group Intervention by Richard J. Ottenstein, Ph.D., CEAP, CTS Journal Submission: International Journal of Emergency Mental Health Published: Volume 5, Number 1, Winter 2003 Submitted by: Richard J. Ottenstein, Ph.D., CEAP, CTS Address: The Workplace Trauma Center

More information

We Can Test the Experience Machine. Response to Basil SMITH Can We Test the Experience Machine? Ethical Perspectives 18 (2011):

We Can Test the Experience Machine. Response to Basil SMITH Can We Test the Experience Machine? Ethical Perspectives 18 (2011): We Can Test the Experience Machine Response to Basil SMITH Can We Test the Experience Machine? Ethical Perspectives 18 (2011): 29-51. In his provocative Can We Test the Experience Machine?, Basil Smith

More information

Use of Structure Mapping Theory for Complex Systems

Use of Structure Mapping Theory for Complex Systems Gentner, D., & Schumacher, R. M. (1986). Use of structure-mapping theory for complex systems. Presented at the Panel on Mental Models and Complex Systems, IEEE International Conference on Systems, Man

More information

VOLUME B. Elements of Psychological Treatment

VOLUME B. Elements of Psychological Treatment VOLUME B Elements of Psychological Treatment Module 2 Motivating clients for treatment and addressing resistance Approaches to change Principles of Motivational Interviewing How to use motivational skills

More information

Motivational Interviewing for Family Planning Providers. Motivational Interviewing. Disclosure

Motivational Interviewing for Family Planning Providers. Motivational Interviewing. Disclosure for Family Planning Providers Developed By: Disclosure I I have no real or perceived vested interests that relate to this presentation nor do I have any relationships with pharmaceutical companies, biomedical

More information

THIS IS A SAMPLE OF THE ELECTRONIC FORM THAT IS MADE AVAILABLE TO AGENCY FIELD INSTRUCTORS EACH TERM

THIS IS A SAMPLE OF THE ELECTRONIC FORM THAT IS MADE AVAILABLE TO AGENCY FIELD INSTRUCTORS EACH TERM THIS IS A SAMPLE OF THE ELECTRONIC FORM THAT IS MADE AVAILABLE TO AGENCY FIELD INSTRUCTORS EACH TERM Field Evaluation Assessment of Student by Field Instructor (to be completed by field instructor/supervisor)

More information

Mapping A Pathway For Embedding A Strengths-Based Approach In Public Health. By Resiliency Initiatives and Ontario Public Health

Mapping A Pathway For Embedding A Strengths-Based Approach In Public Health. By Resiliency Initiatives and Ontario Public Health + Mapping A Pathway For Embedding A Strengths-Based Approach In Public Health By Resiliency Initiatives and Ontario Public Health + Presentation Outline Introduction The Need for a Paradigm Shift Literature

More information

EXPERIMENTAL DESIGN Page 1 of 11. relationships between certain events in the environment and the occurrence of particular

EXPERIMENTAL DESIGN Page 1 of 11. relationships between certain events in the environment and the occurrence of particular EXPERIMENTAL DESIGN Page 1 of 11 I. Introduction to Experimentation 1. The experiment is the primary means by which we are able to establish cause-effect relationships between certain events in the environment

More information

HOW TO IDENTIFY A RESEARCH QUESTION? How to Extract a Question from a Topic that Interests You?

HOW TO IDENTIFY A RESEARCH QUESTION? How to Extract a Question from a Topic that Interests You? Stefan Götze, M.A., M.Sc. LMU HOW TO IDENTIFY A RESEARCH QUESTION? I How to Extract a Question from a Topic that Interests You? I assume you currently have only a vague notion about the content of your

More information

HUMAN ERROR: CURRENT PERSPECTIVES AND IMPLICATIONS FOR ASSESSING BLAME

HUMAN ERROR: CURRENT PERSPECTIVES AND IMPLICATIONS FOR ASSESSING BLAME HUMAN ERROR: CURRENT PERSPECTIVES AND IMPLICATIONS FOR ASSESSING BLAME Kenneth R. Laughery Rice University Houston, Texas, USA 1 HUMAN ERROR: CURRENT PERSPECTIVES AND IMPLICATIONS FOR ASSESSING BLAME Kenneth

More information