HUMAN FACTORS Part IV – The Science of Human Factors

In the last of this series on Human Factors, Captain Bob Henderson discusses a new way of looking at safety when it comes to the human element of a process or event that could have, or does have, an adverse outcome - Safety-II. Essentially, asking “What went right?” rather than “What went wrong?” - a huge change in how we approach understanding an event so that future events can be made safer.

Black Swan events (highly unlikely scenarios that nevertheless do happen, and with great impact) will continue to occur - humans lack the ability to see into the future. We can’t guard against every possible variable and circumstance when it comes to adverse events - that’s why insurance exists. But when lives and multi-million-dollar equipment can be destroyed, we try our best to protect against the possibility.

Two aviation events come to mind when I think of the Safety-II way of looking at things - especially from the human factors point of view.

The first is Azerbaijan Airlines Flight 8243. This crash, on Christmas Day 2024, made me weep as I thought of the extraordinary effort the tech crew went to in order to make it to safety, and the prolonged distress of everyone else on board as they could only sit and await their doom.

The rear fuselage of Azerbaijan Airlines Flight 8243 on the shore of the Caspian Sea.

For an hour and fifteen minutes after being hit by a Russian missile, the pilots nursed the severely disabled aircraft across the Caspian Sea and managed to get it to within cooee of the airport they hoped to land at. Of the 67 people on board, 38 lost their lives - including those heroic pilots and a flight attendant.

Kudos is also given to Embraer for building such a sturdy passenger aircraft that it survived a missile strike. But we can only imagine how supremely the tech crew worked together, and with the aircraft, to stay aloft for so long and cover so much distance to get the best outcome they could hope for. It’s such a tragedy that in the final moments (only 3 km from the runway) it all fell apart (literally and figuratively). Nonetheless, 29 people survived. Injured, but alive to live another day.

The second event was US Airways Flight 1549 on 15th January 2009. The event is more commonly known as Sully - Miracle on the Hudson, thanks to the 2016 movie of the same name. Because the event is so well known, I won’t go into too much detail here. But it is worth mentioning that the total flight time was only 4 minutes. Captain Chesley Sullenberger and First Officer Jeffrey Skiles managed to make decisions and successfully ditch the aircraft so that not a single fatality occurred among the 155 people on board.

Miracle on the Hudson

In both of the above events, if the crew hadn’t worked well, either together or individually, it is a reasonable assumption that everyone on board would have been killed.

With these thoughts in mind, I’ll now pass you over to Captain Bob.


In Part III we walked through Professor Reason’s Swiss cheese model and found ourselves balanced on the final slice, at the last line of defence, hoping that our CRM skills would be sufficient to avoid an incident or accident.

The LOSA Collaborative and the development of Threat and Error Management (TEM) provided a structured way to look at the risk factors associated with an activity while acknowledging that humans will make errors.

So the thrust of the development of the science of human factors has been to understand the situations in which humans will make errors, and then to strengthen the system to support the human operators rather than trying to make the human adapt to an inappropriate system design.

Two separate pathways lead on from the TEM concepts. One has been established by Professor Erik Hollnagel, the other by David Marx. Erik holds a PhD in Psychology; David has a BSc in Engineering and a Juris Doctor in Law. Both have developed concepts and practices that build on the idea of human error and the reasons that humans may make errors.

Hollnagel’s work arose from concerns that, for all the good work in human factors science, accidents still occurred, and that a huge amount of effort was being expended in unpicking the accident trajectory and understanding “why”. Hollnagel calls this backward-looking, hindsight view of an accident Safety-I.

With modern aircraft, engineering capability and simulators able to expose pilots to likely and possible scenarios, the probability of a repetition of any particular event is low. Hollnagel, while agreeing that the investigations are necessary, also asserts that the next accident is unlikely to mimic a previous one.

Some statistics might help put this in perspective. In 2023, there were some 36 million commercial flights. For jet aircraft, there were zero hull loss and zero fatal accidents, compared with 0.24 accidents per million sectors in 2022. For turboprop aircraft, the hull loss accident rate declined from 1.76 accidents per million sectors in 2022 to 0.57 accidents per million sectors in 2023[1].

Hollnagel argues, therefore, that the rate of technological development means that safety concerns must address systems that are larger and more complicated than the systems of yesteryear. Consider in this regard the modern fly-by-wire, FMS-controlled passenger fleets with technologically advanced, efficient, high-bypass, computer-managed turbine engines, epitomised by the Boeing 787 and Airbus A350. There are tight couplings between functions, and systems may change faster than they can be described, meaning some modes of operation may be incompletely known. The net result is that many systems are underspecified or intractable, and it is not possible to prescribe tasks and actions, especially emergency actions, in every detail.

Hollnagel argues that we need to understand why and how something went right. Safety-II is, therefore, a condition where the number of successful outcomes is as high as possible. It is the ability to succeed under varying conditions and is achieved by trying to make sure that things go right, rather than by preventing them from going wrong.

The key to a productive Safety-II programme is that participants in any scenario or action identify and report “why” and “how” they managed to achieve a positive outcome in the circumstances. Think of this as the opposite of reporting an incident; you are instead reporting a good outcome.

The subject of reporting then leads us to consider the work of David Marx. His seminal text, Whack-a-Mole: The Price We Pay For Expecting Perfection, focuses on society’s tendency to demonize people who make mistakes that cause harm, while at the same time taking a “no harm, no foul” approach to reckless behaviours that cause no undesirable outcome.

“It’s YOUR fault!”… Yeah nah man, not cool. That’s not ‘Just Culture’

Marx built on earlier work by exponents such as James Reason to develop the concept of a just culture. This globally accepted concept is related to systems thinking, which emphasizes that mistakes are generally a product of faulty organizational cultures, rather than solely brought about by the person or persons directly involved.

In a just culture, after an incident, the question asked is, "What went wrong?" rather than "Who caused the problem?". A just culture is not the same as a no-blame culture as individuals may still be held accountable for their misconduct or negligence.

A just culture helps create an environment where individuals feel free to report errors and help the organization learn from mistakes. This is in contrast to a "blame culture", where individuals are fired, fined, or otherwise punished for making mistakes, but where the root causes leading to the error are not investigated and corrected. In a blame culture, mistakes may not be reported but rather hidden, ultimately leading to diminished organizational outcomes.

In a system of just culture, discipline is linked to inappropriate behaviour, rather than harm. This allows for individual accountability and promotes a learning organization culture.

Safety-II relies on participants in the system reporting openly and honestly. Honest human mistakes (think of the definitions from James Reason) must be seen as a learning opportunity for the organization and its employees.


A summary of this series of blogs is that humans will make errors (slips, lapses and mistakes), and these are opportunities for learning and understanding why things went right (Safety-II), as long as they can be identified and reported in an environment that seeks growth, not blame (just culture).



[1] IATA Accident Data 2023.

[2] Hollnagel, E. Safety-I and Safety-II: The Past and Future of Safety Management. Ashgate, Surrey, 2014, p. 147.




