HRO 15c: HRO Principle 7: Assessment Pt2
Introduction
In the previous post on assessments, I described what they are and why they are an important means for combating design and organizational drift, the "slow uncoupling of local practice from written procedure" (Snook, 2002, p.225). Weaknesses and gaps in organizational processes can manifest themselves at any time. It is better to identify and correct these problems before operating in situations where the consequences of reliability gaps are much higher.
In this second part of the discussion of assessment and its contribution to High Reliability Organizing, I argue that the identification of surprises and problems through assessment is an essential part of the resilience that HRO seeks to cultivate. I conclude with a consideration of assessment design principles. The list of references cited in this part can be found at the end of the first post on assessment.
Assessments take time and attention, but they are much less costly than discovering weaknesses when the consequences of failure are high.
Resilience
Drift detection and management for reliability improve resilience (Pettersen & Schulman, 2019) in several ways. First, systems and personal performance are constantly fluctuating within the boundaries of safe operations even when there are no bad outcomes. These fluctuations are not easy to observe given all the other things members of organizations must attend to. Assessments make adaptation and resilience processes within an organization visible. Where is resilience located in the processes? How are the processes operating? What’s missing? The insight gained from process and performance audits creates opportunities for intervention to cope with declines in resilience caused by complexity.
Second, safety comes from both design and discovery (Wildavsky, 2017). Since designers cannot reasonably foresee all possible operational contexts, organizations seeking higher reliability are continually searching for and sometimes experimenting with ways to enhance it. It is a misconception that HRO includes no space for trial-and-error learning (Bierly & Spender, 1995). Assessments are a form of trial-and-error experimentation that leads to a deeper understanding of reliability-enhancing behaviors, attitudes, and processes. “The more people that participate in this exploration, the more widespread the search, the more varied the probes and reactions, the more diverse the evaluating minds, positive and negative, the better” (Wildavsky, 2017, p.245).
Finally, assessment leads to resilience through professional development (Pettersen & Schulman, 2019). First, people performing assessments improve their skills of observation by noting the performance of individuals as well as teams and how they communicate, coordinate complex tasks, and back each other up (or don’t). Second, when assessors are required to connect perceived deficiencies to technical requirements, they can improve their ability to reason from requirements to actions and the reverse. This can sharpen their sensemaking skills when they return to their normal jobs. They can witness how failures to perform HRO practices well show up in operations and records. Third, people being evaluated, or people reading reports of assessments conducted at other organizations, can learn that maladaptations of procedures at a local level can undermine resilience more globally.
An important form of assessment seldom considered by academics is the formal report of an incident or close call. Criteria for what are considered reportable events vary across organizations and over time within organizations. The general idea of designating an undesired outcome as a sentinel event worthy of deeper investigation is that the criteria establish a baseline for acceptable performance. A sentinel or reportable event can be a significant departure from procedures, requirements, or performance with actual or potentially serious consequences for the organization, the community, or the industry. Because of the seriousness attached to the event, an organization must investigate it and issue a formal report of its investigation to document what happened and what it learned. An incident report is at once a self-assessment of the problem and a vehicle for external assessment because supervisory organizations and regulators can evaluate the effectiveness of the investigation and corrective action. This form of assessment and learning leads to greater organizational resilience and improved communications (sometimes painful) with regulatory organizations.
The importance and value of learning quickly from exploration and error are significantly underestimated by people who don’t have experience with HRO. People in organizations practicing HRO seldom admit to regulators or outsiders (e.g., academics observing them) that they regularly make mistakes, analyze them, and adjust their practices, possibly because it doesn’t sound reliable even to themselves. Yet it is crucial. Being more resilient doesn’t come from memorizing steam tables or procedures; it comes from discovering and learning from failure faster. Every practice in HRO is open to question and revision. This is as true for organizations engaged in HRO as it is for the Toyota Production System, which is one of the characteristics that makes simply copying what they are doing now useless.
Assessment Design
All organizations have gaps in their defenses against hazards. The difference between organizations that energetically perform assessments and those that don’t is that their gaps in defenses are smaller (Helmreich, Wilhelm, Klinect, & Merritt, 2000). Assessment is a review and analysis of performance, done in-process when appropriate.
There are two ways to assess performance: internal and external. Each has strengths and weaknesses. Internal assessments are easier to do (little scheduling required), can be more or less informal (e.g., findings shared orally with the personnel observed or written up and forwarded to higher authority), are easier to target on command priorities, and can generate large volumes of data for trend analysis. The disadvantages of internal assessments are that they aren’t visible outside the organization (this depends on context; civil aviation shares anonymous assessment data widely), investigators may have low expertise, leading to ill-informed findings and failures to recognize problems, they are not informed by problems at other organizations doing similar work, inspector findings are subject to command pressure (people don’t like having their errors reported to the boss), they can take time away from “day jobs” so they are often deferred, and their results can be ignored by organizational leaders.
External assessments bring much greater rigor and formality because they employ professional inspectors or people who are experts in a particular area, typically document findings in formal reports forwarded to supervisory organizations so they can’t easily be ignored (although corrective actions can be perfunctory unless they too are assessed), are subject to less command influence, and their inspectors can look for problems found at other organizations. The disadvantages of external assessments are that inspectors bring their own biases to the assessment, not always basing their findings on technical requirements (an undiscussable), external inspectors may be biased toward finding low-level deficiencies to justify their existence (what counts as low-level is an essentially contested concept), and the detailed focus of some external inspections can disrupt work and managerial attention (shhh, another undiscussable).
When I led external assessments, I made it a practice to discuss my findings with the organization before briefing its leaders. Depending on the feedback received, I reworded the finding, reclassified it as an opportunity for improvement, or deleted it. I wanted to establish agreement that what my team and I identified as a deficiency was recognized as a deficiency by those responsible for managing it. My goal was to enhance the perceived legitimacy of the assessment.
There is no one way to do assessments for HRO. They can be based on a mix of theory, recent problems at other organizations, or particular headquarters goals, especially recent changes. The theories underlying HRO in a given context are not always explicit; sometimes they must be inferred from headquarters, regulatory agency, and industry group articulations of HRO. The benefit of using theory is that it aligns assessment design with the principles of HRO. The assessment then evaluates an organization’s understanding of how to put the principles of HRO into practice.
Assessments in HRO focus on these areas:
results of training: systems, theoretical, operations knowledge
records of qualification and training
self-assessment and follow-up (for problems found)
compliance with external requirements
performance on routine and emergency operations
administrative records for key programs
organizational response to problems (the problems that find you)
material condition
Critiques of unexpected events and assessments of procedural violations or outcomes not meeting minimum standards are a special form of self-assessment within an organization. This is why external organizations frequently review the records of problem identification, corrective action, and follow-up. Together these records provide evidence of how an organization responds to and learns (or doesn’t) from problems. From an assessment perspective, having no problems means an organization isn’t looking.
A mere records review is not sufficient for understanding the work in HRO. Records are easy to optimize and scrub. Face-to-face performance isn’t. People can’t hide low levels of knowledge, poor training, and procedure violations while they perform work, particularly when assessors check their understanding as they do it. I’ve had surprisingly long conversations with people who strongly defended the sufficiency of checking work performance through “desk interviews.” You can conceal a lot in a desk interview. When performing the work, your ignorance is naked. I’ve also talked with people who do a lot of in-process observations of work. They have told me that the minimum length for an observation of general control room performance is two hours. Their rationale is that anyone can fake exemplary performance for a little while, but eventually people revert to what they normally do. This is where all the good assessment data is.
Summary
Assessment compares organizational design and planning with what members actually do. Assessments seek a deeper understanding of an organization’s reliability by probing the micro-processes that support it. Formal assessment is based on HRO theory and objective criteria sourced to technical requirements, and it is documented in writing. The documentation of assessments supports organizational transparency and is itself an opportunity for assessment. Organizational assessment for HRO is necessary because no plan or design remains unchanged under the pressure of operations. It is also necessary because the principles and practices of HRO aren’t easy to do consistently well. Neither are assessments. Assessment improves resilience through professional development as well as deficiency correction. HRO assessments are systematic, based on theory as well as on standards for particular kinds of work judged to be important or that best reflect the theory in practice. Assessment is a second fundamental principle of High Reliability Organizing overlooked by Weick and Sutcliffe in their canonical list of HRO principles. Problems and deviations from requirements are inevitable. An organization that has no problems isn’t looking.