In 1942 Isaac Asimov, in a science fiction story called “Runaround”, introduced three laws that the robots in his stories must obey. The first law is this: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” A simplified form: “A robot may not harm a human being.”
Are these not fictional laws for fictional robots? Certainly. Yet as robotics and the various bio-engineering sciences advance their technologies and create devices with increasing ability to act autonomously and “intelligently”, the ethical debates around such devices — the ethics of their decision processes and the moral responsibility for their actions — grow increasingly important. Asimov’s laws of robotics sometimes come up in such discussions, and the first law seems obviously desirable. Some robotic systems have powerful mechanisms that can kill or maim a human; such a device must be able to sense a human presence and cease or avoid action in that area. Heuristic systems that can “learn” and modify their own decision-making processes figure especially prominently in these discussions. Clearly one of the limiting parameters on a heuristic robot is that it must not learn to harm a human; positive controls must be in place so that it cannot learn to do so.
A few days ago The New York Times published a story on the long history of cover-ups and prevarications by GlaxoSmithKline concerning the cardiac dangers of its diabetes drug Avandia. There are real questions under discussion about how to interpret drug trial data, and sometimes a medical cure involves an inadvertent harm that, given present technology, is unavoidable (e.g. the side effects of cancer therapy). Such discussions aside, the internal memoranda show that GlaxoSmithKline knew of the problems for years and did its best to conceal its findings. The company acted similarly with its drug Paxil, which increases suicidal thoughts and behavior in teenagers.
Might it be too much to require the management of drug companies to adhere to the first law of robotics? How is it that such a constraint applies categorically to robots but not to humans who, with far greater impact and responsibility, daily make decisions about products that can harm, or even kill, other humans? After all, if we find such a law desirable and necessary for robots, surely it should, a fortiori, apply to humans — and above all to those in the healing professions.