Editor’s Note – we are happy to introduce this guest article from Moin Rahman, Founder of HVHF Sciences. His bio and a link to his company’s website are located at the end of his article.
Is there a Hippocratic Oath – or something similar – for Human Factors Practitioners? I have not heard of one specific to human factors, although there is a similar oath for engineers, and there have been discussions about an oath for scientists and engineers in general. Nevertheless, human factors professionals are driven by our morals and professional ethics to design devices and solutions that, in the words of Asimov’s First Law of Robotics, “may not injure a human being or, through inaction, allow a human being to come to harm.”
Good so far. But the ethics of a human-machine system or complex sociotechnical system (STS), particularly at the intersection of humans and safety-critical technology, may not receive the attention it deserves. Let me highlight this by raising the following rhetorical questions:
- What kind of safeguards should an STS possess to prevent a pilot with psychosis – and with suicidal ideation – from flying a plane?
- Should a system be designed to alert a supervisor when a human operator at the tactical edge (e.g., a security guard or soldier) is neither vigilant nor exercising the moral judgment expected of him?
- Should a machine – say, a car – blow the whistle to the system (a.k.a. the infrastructure: car dealership, traffic safety enforcement agency, or surrounding cars via M2M communication) that the owner has not performed the required repairs to fix faulty brakes, or is still driving around on worn-out tires?
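The third scenario can be made concrete with a small sketch. Everything here is hypothetical – the class names, the `safety_critical` flag, and the `report_noncritical` policy switch are invented for illustration and do not correspond to any real vehicle API – but the sketch shows where the ethical tradeoff would live in such a design: a single configuration choice determines whether the car discloses only safety-critical lapses or every overdue item, at a greater cost to the owner’s privacy.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration only: names, fields, and the reporting policy
# are invented for this sketch, not drawn from any real connected-car system.

@dataclass
class MaintenanceItem:
    name: str
    due: date               # date by which the work should have been done
    safety_critical: bool   # e.g., brakes/tires (True) vs. an oil change (False)

def overdue_reports(items, today, report_noncritical=False):
    """Return the names of items the car would report to the infrastructure.

    The `report_noncritical` flag is the ethical knob: should the car
    disclose only safety-critical lapses, or every overdue item?
    """
    return [
        item.name
        for item in items
        if item.due < today and (item.safety_critical or report_noncritical)
    ]

items = [
    MaintenanceItem("brake pads", date(2015, 3, 1), safety_critical=True),
    MaintenanceItem("oil change", date(2015, 4, 1), safety_critical=False),
]
today = date(2015, 5, 1)

print(overdue_reports(items, today))                           # safety-critical lapses only
print(overdue_reports(items, today, report_noncritical=True))  # everything, less privacy
```

The point of the sketch is not the code but the design question it surfaces: whoever sets that one flag – the manufacturer, the regulator, or the owner – is making an ethical decision on behalf of everyone the system touches.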
Consider the first case concerning the pilot, which did transpire recently in the real world: the tragic and intentional crashing of Germanwings Flight 9525 by a co-pilot who suffered from severe depression. Findings from the initial investigation suggest that the co-pilot went to great lengths to hide his mental illness from his employer and professional environment. Alternatively, one may also ask: was the employer lax due to throughput and financial pressures, failing to do the due diligence required to determine the pilot’s fitness for duty? Last but not least, what should the pilot do if he finds himself in a quasi Catch-22 situation? For example, the pilot may be aware that he is mentally unfit to fly the plane, but he is not in a position to reveal it because he risks losing his job and livelihood: if he does reveal it, he might be put on probation and eventually lose his job. Or, if the employer has lax standards, it may still allow the pilot – who might be undergoing treatment – to fly the plane.
Similar scenarios exist where a soldier’s judgment may be compromised by PTSD, yet he continues to perform on the job voluntarily, or is forced into one tour of duty too many due to manpower shortages. In a similar vein, consider the third case cited above: should the car be grounded and not allowed on the road until the required maintenance is performed, so that it does not injure or cause harm?
Many traditional human factors programs at the graduate level do put a spotlight on philosophy. It is usually the philosophy of science, pertaining to foundations, methods, hypothesis testing, paradigm shifts, and the evolution of the scientific method; very little ground, if any, is covered in the philosophy of technology and, more importantly, ethics.
So what is ethics? A good primer is provided by the Markkula Center for Applied Ethics here. I quote a part of this primer that is quite relevant to the practice of human factors:
> Ethics is two things. First, ethics refers to well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms of rights, obligations, benefits to society, fairness, or specific virtues. Ethics, for example, refers to those standards that impose the reasonable obligations to refrain from rape, stealing, murder, assault, slander, and fraud. Ethical standards also include those that enjoin virtues of honesty, compassion, and loyalty. And, ethical standards include standards relating to rights, such as the right to life, the right to freedom from injury, and the right to privacy. Such standards are adequate standards of ethics because they are supported by consistent and well-founded reasons.
To a human factors professional, normative ethics – rights, obligations, prevention of injury, privacy, and so on – should not pose a problem. But what about applied ethics? Say, when privacy and injury come into conflict with each other. The three cases I cited above (Pilot, Soldier, Connected Car) bring this conflict to the fore. In other words, is the human factors professional obligated to design a technology or sociotechnical system that alerts management or the regulator to abnormalities by capturing and synthesizing data across the information ecosystem – for example, everything from the pilot’s medical records pertaining to his visits to a psychiatrist, to his performance on the job, including his off-the-job social interactions with colleagues? Even if aspects of this may infringe on his privacy?
These are indeed difficult questions that may look intractable on the surface. But as human factors practitioners, sooner or later we will be confronted with them. Thus it is imperative that ethics, and the humanities more broadly, be included in human factors education and certification.
Moin Rahman is the Principal at HVHF Sciences. Moin has taught, researched, and practiced human factors science and cognitive engineering over the last 20 years.
Because of my strong interest in ethics, Moin asked me to add some comments to his article. He raises some critical points, and the kind that are better considered in the abstract, before we have a real incident to investigate that has led to extensive damage or loss of life. We know that when these emotional events occur, it is hard for us to reason as carefully and diligently. That is one reason we have a Code of Ethics for Human Factors/Ergonomics.
Recent updates to the Germanwings story suggest that the pilot practiced crashing the plane in advance. If the STS had been monitoring carefully, it might have identified this behavior, perhaps cross-referenced it with the pilot’s medical records to diagnose his true intentions, and then informed management – but at a very high cost to the pilot’s privacy. Immediately after the event, most of us would consider this an easy tradeoff to make. But in the abstract it looks very different. In Moin’s third example, do you want your car ratting you out for being late on your oil change? To the motor vehicle authority? To your spouse?
I am guessing you are less sure about that last example. That is why we need to have this discussion now, before our cars are smart enough to really do it – so that it can inform the design of the smart-car/smart-highway Internet of Things when human factors practitioners are faced with these choices.
Image credit: Brian Turner