Several recent airliner accident reports have identified problems with cockpit automation as principal or contributory causes of the accidents. Much of the conventional reaction (especially by pilots) to these incidents is of the "automation must be stopped" or "automation has gone too far" variety. That reaction, in human terms, is easy to understand, but it is less easy to accept. What is really needed is an overhaul of the relationship between human beings and automated systems.

While the modern airliner could not and would not operate effectively without a high degree of automation, there is no airliner in service that cannot be operated in an emergency with no automation at all. This is the essential difference between fundamentally stable airliners and fundamentally unstable modern combat aircraft.

The signal achievement of automation in the past 20 years has been to persuade a generation of pilots, who were taught to fly with little or no automation, to delegate functional authority to automation. The signal failure has been that those who have delegated functional authority have not been left with a feeling of retained executive responsibility, or with the confidence that they can regain functional authority on demand.

The challenge in human factors is to ensure that pilots regard automation as something there to help them, not to supersede them. While an automated system may be doing a job on behalf of a pilot, the pilot must still know both what the aircraft should be doing and what it is doing.

Ordinarily, the computers report back via screens that everything is proceeding smoothly. Ordinarily, the pilot can decide from his own feedback (looking out the window, watching the instruments) that the computers are right. The trick lies in deciding, from that feedback, that the ordinary may no longer apply.

The difficulty for some pilots (maybe a great many pilots) seems to lie in accepting that the computer to which they have delegated so much authority, and in which they have invested so much trust, may be wrong. If a first officer makes a mistake, most pilots will take action to correct it or take back control themselves. Few seem willing to act so decisively or so promptly when a computer makes, or seems to have made, a mistake.

Every airliner has standby instruments, driven by sensors and power sources other than those which feed the computers. Those standby instruments give a pilot little more information than attitude, altitude, heading and speed, but that should be enough at least to keep the aircraft stable and in the air. (They are, after all, no different from the basic instruments with which he first learned to fly.)

The captain is still the captain, and computers and crew alike still report to him. The captain's role is to ensure that those to whom he has delegated authority - computers and crew alike - are doing their jobs correctly. The captain is in that role because he should be experienced enough to understand what a subordinate is doing - or doing wrong - and mature enough to try to compensate for or correct the error without getting into a fight with that subordinate. The question is: can the average captain deal with a computer as well as he can deal with a crew-member?

Several of the recent incidents suggest not. Instead, there seems to be a disturbing trend towards the assumption not only that the computer is always right (even in the light of contradictory physical evidence), but also that once there is a problem with a computer, there is a terminal problem with the aircraft and crew it is meant to be serving.

This is a problem of the inadequacy not so much of pilots as of their training. If there is a lesson here, it must be that all pilots be taught, recurrently, one set of rules and one principle:

"The captain is still the captain; computers and crew still report to him"

Source: Flight International