By: Francois Aubin.

The B-29

The story of human factors engineering begins in 1949, with a pioneering researcher named Alphonse Chapanis. Tasked with investigating a troubling trend—the high number of accidents involving B-29 aircraft—Chapanis uncovered a critical insight that would forever change how we design systems. At the time, the United States alone experienced about 30 incidents per year, a staggering number that demanded urgent attention. Determined to uncover the root cause, Chapanis decided to observe pilots during takeoff and landing. What he discovered was both simple and profound.

The Problem: Design Flaws Leading to Human Error

During his observations, Chapanis noticed a critical design flaw: the landing gear and flap controls were placed side by side, with identical shapes. Both controls were used frequently during critical phases of flight, such as landing and climbing. This design created a high likelihood of pilots selecting the wrong control, especially under pressure. The consequences were often catastrophic, leading to accidents that could have been avoided.

This wasn’t a failure of the pilots—it was a failure of design. Chapanis realized that the system wasn’t accounting for human limitations. Instead of blaming the operators, he focused on fixing the system itself.

The Fix: Designing for Human Limitations

The solution was elegant yet revolutionary. Chapanis redesigned the controls, giving the landing gear lever a wheel-like shape and the flap control a wing-like shape. This small but significant change made it nearly impossible for pilots to confuse the two. The result? A dramatic reduction in accidents, from 30 incidents per year to just a handful.

This breakthrough marked the birth of human factors engineering—a field dedicated to designing systems that account for human limitations and prevent errors. It was no longer about expecting humans to adapt to flawed systems; it was about designing systems that adapted to humans.

The Birth of Human Factors Engineering

From this breakthrough, researchers began to study the full spectrum of human limitations: attention, memory, decision-making, and more. They realized that by understanding these constraints, they could design systems that not only prevented errors but also enhanced human performance. This led to the development of cognitive engineering, a discipline focused on aligning system design with human cognitive processes.

The goal was clear: design systems that work with human nature, not against it.

 

Humans as Neural Networks: A Fascinating Parallel

Humans, much like neural networks, are complex and sometimes unpredictable. Consider this: if four people witness a car accident, you’ll likely hear four different accounts of what happened. This phenomenon, studied under labels such as false memory and eyewitness unreliability, highlights the fallibility of human perception and cognition. Similarly, neural networks—like humans—can experience hallucinations, memory limitations, and attention lapses. Both systems are, in essence, “black boxes” with inherent limitations.

This parallel between humans and AI is more than just a metaphor—it’s a framework for understanding how to design better systems for both.

Managing Human Error: Lessons from Nuclear Power Plants

The principles of human factors engineering have been applied in high-stakes environments like nuclear power plants. For example, when operators need to transfer power from one grid to another, they create detailed plans. However, instead of relying on a single operator, multiple teams independently develop plans. If the plans match, the solution is likely correct. If they differ, it signals a potential error.

This approach mirrors how we can manage AI systems: by cross-verifying outputs and ensuring consistency. It’s a powerful reminder that redundancy and collaboration are key to preventing errors, whether in human or machine systems.
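As a rough sketch of that idea in the AI setting (not a procedure from any actual plant or product), the snippet below treats each independent generation as one "team" and only accepts a plan when every team agrees; generate_plan is a hypothetical placeholder for a model call, a prompt variant, or a human reviewer.

```python
from collections import Counter

def generate_plan(prompt: str, seed: int) -> str:
    """Stand-in for one independent 'team': a separate model call, prompt
    variant, or human reviewer. Hypothetical; replace with a real generator."""
    raise NotImplementedError

def cross_verify(prompt: str, n_teams: int = 3):
    """Collect independently produced plans and check whether they agree.

    Agreement does not prove correctness, and disagreement does not say which
    team is wrong; it only signals that a person should look at the plan
    before anything is executed.
    """
    plans = [generate_plan(prompt, seed=i) for i in range(n_teams)]
    most_common_plan, votes = Counter(plans).most_common(1)[0]
    consensus = votes == n_teams  # every team arrived at the same plan
    return (most_common_plan if consensus else None), consensus
```

Requiring unanimity is deliberately conservative; a looser majority vote would trade some safety for availability.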


AI and Human Factors: A Powerful Combination

At Cognitive Group, we’ve combined decades of expertise in human factors engineering with cutting-edge AI technology. Just as we’ve learned to manage human error, we now apply these principles to prevent AI errors and hallucinations. For instance, we use techniques like cross-checking calculations, comparing diagnoses across multiple regions, and ensuring consistent data formats to enhance the reliability of AI systems.
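To make the "consistent data formats" point concrete, here is a minimal, hypothetical sketch (field names like total_assets are illustrative placeholders, not Cognitive Group's actual schema): any output that does not arrive in the agreed shape is flagged before it can feed downstream calculations.

```python
# Hypothetical field names; a real deployment would follow the bank's own chart of accounts.
EXPECTED_FIELDS = ("total_assets", "total_liabilities", "equity", "net_income")

def validate_format(record: dict) -> list:
    """Check that an AI-extracted record uses the agreed field names and numeric
    values. Returns a list of problems; an empty list means the format is consistent."""
    problems = [f"missing field: {name}" for name in EXPECTED_FIELDS if name not in record]
    problems += [
        f"field is not numeric: {name}"
        for name in EXPECTED_FIELDS
        if name in record and not isinstance(record[name], (int, float))
    ]
    unexpected = sorted(set(record) - set(EXPECTED_FIELDS))
    if unexpected:
        problems.append(f"unexpected fields: {unexpected}")
    return problems
```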

By treating AI systems as we would human operators—with an understanding of their limitations—we can design more robust and trustworthy technologies.

A Real-World Example: Financial Statements

One recent application involved financial statements. Banks often struggle with inconsistent formats when calculating assets, liabilities, and profits. Even when the labels are similar, the underlying data can vary significantly between companies. To address this, we developed an AI system that standardizes financial statements into a consistent format. We then added a second layer of AI to cross-check the calculations, ensuring accuracy and reliability.
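As an illustration of what that second verification layer could look like (a simplified sketch under the assumed field names above, with net income approximated as revenue minus expenses; real statements are far richer), the check below re-derives figures independently and flags any statement that does not reconcile.

```python
def cross_check_statement(record: dict, tolerance: float = 0.01) -> list:
    """Second-layer check on a standardized statement: re-derive what can be
    re-derived instead of trusting the first extraction.

    Assumes validate_format (above) has already confirmed the required fields
    exist and are numeric.
    """
    issues = []

    # Balance-sheet identity: assets should equal liabilities plus equity.
    gap = record["total_assets"] - (record["total_liabilities"] + record["equity"])
    if abs(gap) > tolerance:
        issues.append(f"balance sheet does not balance (off by {gap:,.2f})")

    # Simplified profit check: net income should match revenue minus expenses
    # when those line items are present.
    if "revenue" in record and "expenses" in record:
        recomputed = record["revenue"] - record["expenses"]
        if abs(recomputed - record["net_income"]) > tolerance:
            issues.append("net income does not match revenue minus expenses")

    return issues
```

Anything flagged goes to a human reviewer rather than being silently corrected, the same stance Chapanis took: fix the system and surface the ambiguity instead of blaming the operator.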

This approach not only saves time but also reduces the risk of costly errors—proving that the principles of human factors engineering are just as relevant in the age of AI.

The Future: Bridging AI and Human Factors

The intersection of AI and human factors engineering holds immense potential. By understanding the parallels between human cognition and neural networks, we can design systems that are not only intelligent but also resilient to errors. Whether it’s preventing pilot mistakes in aviation or ensuring the accuracy of financial calculations, the lessons of human factors engineering continue to shape the future of technology.

As we move forward, the key will be to design with empathy—for both humans and machines. After all, the best systems are those that understand and adapt to the limitations of their users, whether they’re made of flesh or code.