Welcome to my website. This is where I write about what interests me. Enjoy your visit!
While it is adaptive for humans to distort reality and view events through cognitive biases, it isn’t for HROs. This post discusses why people and organizations avoid accountability, the role accountability plays in HRO, and why it is essential for being highly reliable.
This post and the next focus on the last of the HRO principles that Weick and Sutcliffe missed in their canonical description: accountability. In this post, I discuss what accountability is and the precursors necessary to make it effective.
In this second of two posts on assessment, I argue that assessment contributes to resilience by identifying problems with procedural compliance, records, and performance. I identify the components of HRO assessments. Assessments are a ruthless test of the adequacy of a management system and its design assumptions.
BLUF: Assessment is necessary for learning whether a system or process, including its management, is functioning the way you expect. Audits evaluate compliance with procedures, the effectiveness of training and qualification, and diligence in maintaining safety-critical systems. In the first of two posts on assessment, I address what assessment is and why it is necessary for High Reliability Organizing.
BLUF: After cheekily declaring that the canonical five principles of High Reliability Organizing articulated by Weick and Sutcliffe (2007) are insufficient, I assert that three additional principles are essential for HRO. The first of these is training based on rigorous standards for technical knowledge and performance.
BLUF: In their landmark book, Managing the Unexpected (2007), Weick and Sutcliffe distilled five principles of High Reliability Organizations. I contend that these principles are necessary but not sufficient for understanding High Reliability Organizing. In this post, I explain why more principles are necessary, describe the role of management in HRO, and articulate four criteria that additional principles must meet.
Questioning attitude is the skill of constantly collecting data about a decision and speaking up when it doesn’t make sense. There are strong, unconscious social and psychological pressures to conform to the majority view that must be overcome for a questioning attitude to improve resilience and decision making. Questioning attitude has both technical and social bases.
A questioning attitude is essential for mindfulness and resilience. It can be expressed as dissent to enhance safety and make better decisions. This post defines questioning attitude and argues that it makes divergent thinking accessible, enhances the ability to “listen” to weak signals of danger, and has much in common with Weick’s seven traits of sensemaking.
Blind spots emerge in the course of doing work. They are a function of both the situation and the observer. You can’t search for them, but you can do things to make encountering them more likely, such as creating reporting systems and conducting independent audits. You can’t manage blind spots, but you can learn to improve how your workers and managers react when they discover them.
BLUF: This post uses the Challenger launch decision of January 1986 as a case study in blind spots. It has been analyzed in many books and articles, but my take is unique because of its focus on blind spots.
This post is a return to important HRO concepts. Organizational blind spots are a fact of life. They are safety vulnerabilities that can exist for long periods below an organization’s awareness. Blind spots are a risk for high reliability because an organization doesn’t know where to look for them and they don’t look like problems when encountered.
The complexity and risks involved in safety-critical work are managed with role systems. The work is divided among groups or separate organizations with defined roles. Each is important to the mission of the organization. A role consists of defined behaviors and responsibilities required of people because of their position in the organization.
In this post, I summarize the main points of Charles Perrow’s Normal Accident Theory (NAT). I consider NAT to be the main counterpoint to HRO.
This post provides an overview of High Reliability Organizing (HRO) and the research on it. I distinguish high reliability organizing, the principles and practices that yield superb performance, from high reliability organizations, collections of people working to create high reliability. I focus on the principles and practices of organizing to achieve high reliability (High Reliability Organizing): what the people in the organizations DO and WHY, rather than on the organizations themselves. This blog is about the organizational design practices for active management to reduce failure and increase the reliability of important outcomes.
This post is an introduction to my writings on High Reliability Organizing (HRO). I have a different perspective than others because I am both an organizational scholar and have decades of experience as an HRO practitioner. The blog is a way for me to explore those different ideas and share them with others.