People want to know why self-driving cars cannot work, so I am going to try to tell you why.
But I am not going to rely solely on arguments I have used in the past, which center on the inability of a program or a computer to figure out every contingency that can arise.
First, does a program of artificial intelligence have free will? This is important to ask because human beings do have free will. If the program of artificial intelligence does not have free will, then the program will always be one step behind the human being who will always possess the free will to step outside the program.
Mathematically, no set can have itself as a member; one can never get above oneself; a dog cannot catch its own tail, blah, blah, blah.
If the artificial intelligence program does have free will, then the program has the free will to destroy the human being who is riding inside the car.
Both conditions result in an unacceptable outcome for the human being.
In both cases the human being is handing over the security of his or her life to a machine. If the machine were only making non-life-threatening decisions, there would be no consequence, but the machine is making life-threatening decisions. The human being is traveling down the road at 60 miles per hour inside a 3,000-pound vehicle.
Will the artificial intelligence program be able to make value judgments?
For example, human beings are taught not to swerve if there is a small animal in the road. Suppose the small animal is instead a small child. Let us shakily assume that the program will be able to make the distinction. Will the car swerve?
Suppose that to the side of the vehicle there are two old men. What will the program do then? Will the program swerve and save the young child? Or will the program not swerve, making the cold calculation that one person dead is better than two people dead?
Suppose the men are in their 80s. Will the program calculate their estimated usefulness to society before swerving or not swerving into them? What if they are in their 50s?
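To see how cold such a calculation would be, consider a minimal, purely hypothetical sketch of the head-counting and "usefulness" weighting described above. Every name, age, and weight in it is invented for illustration; no actual self-driving system is claimed to work this way.

```python
# Hypothetical sketch only: a naive "cold calculation" of the kind described
# above. Every name and weight here is invented for illustration.

from dataclasses import dataclass

@dataclass
class Person:
    age: int

def casualty_cost(people, usefulness_weighting=False):
    """Sum a crude 'cost' over the people who would be struck."""
    if not usefulness_weighting:
        return len(people)  # one person dead is "better" than two
    # Invented weighting: discount cost by age, standing in for
    # "estimated usefulness to society".
    return sum(max(0.1, (90 - p.age) / 90) for p in people)

def decide_swerve(straight_ahead, to_the_side, usefulness_weighting=False):
    """Return True if the program would swerve into the group at the side."""
    stay = casualty_cost(straight_ahead, usefulness_weighting)
    swerve = casualty_cost(to_the_side, usefulness_weighting)
    return swerve < stay

child = [Person(age=5)]
two_old_men = [Person(age=82), Person(age=85)]

# Pure head-count: two is worse than one, so the program does not swerve.
print(decide_swerve(child, two_old_men))                              # False
# Age-weighted "usefulness": the child outweighs the two old men, so it swerves.
print(decide_swerve(child, two_old_men, usefulness_weighting=True))   # True
```

Notice that the answer flips entirely depending on which weighting someone chose to program in. That is the point: the "value judgment" is just whatever number somebody typed.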
Suppose the two old men to the side have machine guns and are shooting at cars as they go by. What then? Will that factor into the program's decision to swerve?
Suppose the cars at which the old men are shooting fly ISIS flags and carry passengers who are shooting randomly at people, including the two old men on the side of the road, who are only defending themselves. What will the program do then?
Now suppose the driver of the original car is a member of ISIS. Now suppose that he or she isn't.
Will the program have the biological and cultural experience to make the distinctions that are relevant?
Given that the program will be built into the car rather than into the driver, will the program be relevant to the driver and his or her values?
Who will decide the values of the vehicle? As it stands now, an automobile is value neutral.
For artificial intelligence to work on a massive scale, information must be shared between vehicles.
That is the point of one form of artificial intelligence: to take an infinite number of encounters and share them so as to prevent mistakes.
Yet if the intelligence of the vehicle is tied into the values of the individual, then what good is sharing the information? Such information would not only be useless but potentially counterproductive to the individual.
The vehicle would have to be able to read the experience, values, mind and mood of the individual. That individual’s desires may change from day to day depending upon an infinite number of variables – mood, music, weather, bodily pain, life events.
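Here is a second hypothetical sketch of that tension: a setting learned and shared across a fleet collides with a driver whose own preference shifts from day to day. Again, every function, variable, and number below is invented for illustration.

```python
# Hypothetical sketch: a fleet-learned setting versus an individual driver's
# shifting preference. All names and numbers are invented for illustration.

import statistics

# Following distance (in seconds) that other vehicles in the fleet "learned".
fleet_following_gaps = [1.4, 1.6, 1.5, 1.7, 1.5]
fleet_policy = statistics.mean(fleet_following_gaps)   # about 1.54 seconds

def gap_for_today(driver_mood, bad_weather, back_pain):
    """The same driver may want something different every day."""
    gap = 2.0 if driver_mood == "anxious" else 1.2
    if bad_weather:
        gap += 0.5
    if back_pain:
        gap += 0.3   # a rough ride hurts today
    return gap

for mood, weather, pain in [("calm", False, False),
                            ("anxious", True, False),
                            ("calm", False, True)]:
    personal = gap_for_today(mood, weather, pain)
    # The shared fleet data points one way; the individual's values point another.
    print(f"fleet says {fleet_policy:.2f}s, driver wants {personal:.2f}s")
```

The fleet average never matches any particular driver on any particular day, which is exactly why the shared information can be useless, or worse, to the individual.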
Do I really need to go any further?
Conclusion: Artificial intelligence employed to control the entire driving experience in an open system is inherently flawed and dangerous. The flaws are not remediable. Hence artificial intelligence should be employed in vehicles only as an adjunctive tool to prevent driving mistakes.
Sorry.
Sincerely,
Archer Crosley, MD
Copyright 2020 Archer Crosley All Rights Reserved