
Taking the Human Element out of Decision Making (Danger of A.I.)

GrimKnowledge · Aug 4, 2019, 1:30:34 AM

     "With Artificial Intelligence we are summoning the Demon."  - Elon Musk

     Love him or hate him, Elon Musk garnered a lot of attention with this statement over a year ago and helped further the discussion. My personal belief is that the statement is somewhat hyperbolic, but that there are still some very real-world problems with A.I. The biggest problem is how it is implemented and what it is used for.

     After Musk made the statement, many "A.I. Experts" came out of the woodwork to denounce it and extol the benefits of A.I. They waxed poetic about the many benefits A.I. will bring to our lives. These are people with years of rigorous academic study and ingenious projects and discoveries under their belts. Yet I choose to put "A.I. Expert" in quotes. Why? Is it because I do not believe they are as intelligent as they make themselves out to be? Am I trying to insult them? No. I put it in quotes because no one has produced a truly human-analogous thinking machine, and many of these experts seem either overly optimistic or overly pessimistic. I put it in quotes because this field is still very much in its infancy despite the years of study already done.

     Before I wrote this I listened to many conversations and read many articles. I invite you to do the same and listen to podcast after podcast and read interview after interview with every "A.I. Expert" you can find. Or, because you are a normal person with things to do, you can read this blog and get a few basics (very basic basics). I even tagged four videos at the bottom if you feel like you need more information.

     I chose Lex Fridman as the focus of this blog. After listening to several experts and reading several articles, I found him to be one of the most easygoing and most willing to discuss both the positives and negatives of A.I. He also seems the most willing to entertain new ideas and the least likely to nurse a "sacred cow" (the habit of some scientists to become religiously attached to their theory or discipline to the point of becoming narrow-minded and treating contradictory evidence and/or people as sins and sinners). Mr. Fridman is working on a "Human-Centered A.I." model, and that is where my interest lies. He does, however, get somewhat defensive in his interview with Elon Musk.

     The Joe Rogan Experience podcast with Lex Fridman and the short video of Fridman testing a Tesla/MIT semi-autonomous car are recommended. The third video is a lecture Fridman gave on Deep Learning. That lecture is only for those who are very interested in the technical/programming background of A.I. and are slightly masochistic. There is a series of these videos if you would like to punish yourself. Last is a discussion between Elon Musk and Fridman about A.I. That video is just over 30 minutes and highlights the disagreements between some experts and developers of this technology. Though Musk's cadence, mannerisms, and affect are very strange and make the video painful to watch, it is an interesting interview/discussion.

     In the JRE podcast, Fridman does a good job of explaining the complaints and worries that many have about A.I., and Rogan does a good job of challenging him about the dangers of A.I. and the common fears people have. The two most common fears I see brought up are, first, the far-reaching fear that A.I. will take over the world and/or decide humans are a virus and kill us all, and, second, so-called A.I. bias. The latter of which, in my opinion, Fridman did a poor job of explaining or addressing. It involves the A.I. making decisions based solely on the data provided and being called racist. Look into the racist Google A.I. that was taken offline because the programmers didn't give the A.I. context and it began saying racist things. One hypothetical example always given is that if an A.I. were to decide who gets bank loans, African Americans would be discriminated against if the A.I. is fed statistics about African American loan default and employment rates. The A.I. cannot understand context or see individuals very easily.
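     To make that hypothetical concrete, here is a deliberately oversimplified sketch, written in Python with entirely made-up group names and numbers, of how a system that is only fed aggregate group statistics ends up rejecting every individual in the "riskier" group, no matter how creditworthy that particular person is:

```python
# Toy illustration only: fabricated figures, not any real lender's model.
historical_data = {
    # group: (loans issued, loans defaulted)
    "group_a": (1000, 50),   # 5% historical default rate
    "group_b": (1000, 150),  # 15% historical default rate
}

def default_rate(group):
    issued, defaulted = historical_data[group]
    return defaulted / issued

def approve_loan(applicant_group, threshold=0.10):
    # The "A.I." never sees the individual applicant's income, savings,
    # or payment history -- only the group statistic it was fed.
    return default_rate(applicant_group) <= threshold

print(approve_loan("group_a"))  # True  -- every applicant approved
print(approve_loan("group_b"))  # False -- every applicant rejected
```

The point is not that any real bank runs code this crude; it is that a model can only reflect the data and parameters it is given, and if those are aggregate statistics about a group, the individual disappears.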

     In the first scenario, where A.I. kills us all and/or decides to enslave us, there is no consensus or even a remote agreement about its feasibility. The discussion often goes into the weeds about "narrow" versus "general" A.I. Narrow A.I. is designed and programmed to do a specific task or a series of specific tasks. It has a narrow purpose and narrow programming, such as an autonomous car. The focus of that car is driving and moving through the environment safely. Unless the car is programmed to change its own fundamental programming and the parameters by which it interprets data, there is little risk of this technology taking over the world. General A.I. could be defined as a general intelligence and may be programmed to learn, evaluate its own programming, and change its thinking and behavior. This is the A.I. most regular people have concerns about, and that many intellectuals and academics are either very concerned about or apathetic to. I do not see many in the middle.

     The truth is that the vast majority of the A.I. technology being implemented or talked about is of the narrow variety and does not, in theory, pose the big threat of world domination. The glaring issue I have with this technology, however, has more to do with what tasks the A.I. is given and how it has been implemented. The A.I. is being tasked with responsibilities that involve human safety without enough testing and time put into it. There have been several Google and Uber automated cars involved in accidents, including the Uber automated car that ran over a pedestrian in March of 2018, killing her. The picture in the banner of this post is of a Tesla that slammed into the back of a fire truck last year. Tesla's official line at the time was that the technology is to be augmented by the human driver, and the human driver needs to be paying attention and ready to take over if need be. This ignores very basic human nature. Anyone who thinks about the problem for a minute will realize that many drivers will not pay attention if they feel even the slightest comfort in letting the car do the work. But that isn't even the biggest issue.

     First and foremost, the biggest issue with A.I. is that it can only make decisions based on the data it is fed or can collect, how that data is defined, and the parameters it has been given for decision making. Any data an A.I. can use to make decisions has to come from initial human data entry or from sensors capable of feeding information to the A.I., such as cameras, RADAR, GPS, ultrasonic sensors, etc. If the data set fed to it does not contain enough information or the correct types of information, the A.I. cannot make the correct decision. Period. At this point in time, A.I. cannot create new data from other data. It can only process the data it has, using the parameters it was given for interpreting that data.

     Confused? Well, I will try to simplify with a few very real-world examples. If the data fed to the car's A.I. systems is perfect but the programmer failed to tell the car what a red light meant (incorrect or missing parameters), the consequences could be disastrous unless the human hits the brake. Now what if the parameters are all correct (it knows what a red light is and how to react to it) but the data being fed to the A.I. is wrong? There was a famous incident where some trollish hooligans used lasers to confuse and damage an autonomous car; the lasers confused the car's sensors. Desert mirage, the heat rising from the ground that gives the horizon its wavering appearance, has caused issues with autonomous car sensors in desert climates. Weather can cause issues. Ice buildup can cause issues.
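     To see just how dumb "garbage in, garbage out" can look in practice, here is a tiny, hypothetical Python sketch of the red light example (not anyone's real autopilot code): a missing parameter and corrupted sensor data both end in the same wrong decision.

```python
# Toy illustration only: the rule set and signals are made up for this blog.
def plan_action(detected_signal):
    # Parameters the programmer defined. Note the missing "red" entry --
    # a stand-in for an incomplete rule set.
    rules = {
        "green": "proceed",
        "yellow": "slow_down",
    }
    # Anything outside the rules falls back to a default,
    # which may be exactly the wrong thing to do.
    return rules.get(detected_signal, "proceed")

print(plan_action("green"))  # proceed
print(plan_action("red"))    # proceed -- missing parameter, bad decision

# Correct parameters but corrupted data (a sensor dazzled by a laser,
# a mirage, ice on the camera) fail just as badly:
print(plan_action("glare"))  # proceed -- garbage in, bad decision out
```

The real systems are vastly more sophisticated, but the principle is the same: the machine can only act on the data and parameters it actually has.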

     There needs to be a tremendous amount of testing in both simulated and real environments, putting the A.I. in as many different situations as possible and collecting enough data to make the cars safe. This has not been done to the extent that it needs to be. The technology was allowed on our streets in its infancy, before it should have been, and, as a result, there have been injuries and deaths. This is the major problem with the rush to embrace new technology. Many times technology is rushed into production and allowed onto the market before it is ready, and consumers and those around them pay the costs. This is an example of that. Tesla has released several updates since the autopilot came out, and those updates were mostly based on the data collected from the cars driving around. All of us serve as the guinea pigs.

     Another example that everyone should know about, but almost no one does, is methyl tert-butyl ether (MTBE). It was a fuel additive that lowered emissions and made cars pollute less. It was rushed out due to pressure from green lobbying groups and then, just a few years later, banned and removed from gasoline. Why? Because just a tablespoon of the stuff could poison thousands of gallons of ground water. PG&E also rushed out its huge solar collectors and power station in California with dozens of flowery promises. After it was brought online, customers saw skyrocketing energy bills and scientists saw thousands of birds being fried by the collectors. It took millions of dollars and a great deal of effort to retrofit the collectors with sound makers, spikes, and other deterrents to stop the wholesale slaughter of birds. Now only a few hundred die every year from the collectors instead of several thousand.

     Autonomous cars definitely feel like an example of rushed technology. In the discussion between Musk and Fridman, there is a slight admission from both men about the "uncertainty" of the A.I. and why they do not show customers the truth about how the computer "sees" the world. In the newer Tesla Model 3 there is a sensor console that will show you an optimized, cleaned-up version of the computer's "vision." There is a car in front of you, and you can look out the windshield, see it, and compare it to what the computer "sees." The admission is that there is uncertainty in the edge detection of objects and their spatial position in relation to the car. Musk says that normal people wouldn't be able to understand what they were seeing if the display showed any more information. I happen to agree with this assessment. I am not an engineer and probably would not understand what I was seeing. The admission, though, is that there is uncertainty. The uncertainty may be measured in only a few centimeters or an inch, but it is there. I believe this discussion needs to be had in more detail, and with the general public. Too many issues exist.

     In closing, I wonder about all the technologies that narrow A.I. may be tasked with operating. There are a lot of technologies that should never be given to an A.I., or that should at the very least require the A.I. to present the situation to a human for a final decision before continuing with the task.

     Two very real historical examples that could have gone wrong if left to A.I. involve nuclear war and the death of all life on earth. So many people walk the earth with no concept of these two incidents, which were stopped only because of human hesitation. The first occurred on the 5th of October 1960, when RADAR equipment in Thule, Greenland told operators that the Soviets had just launched what looked to be hundreds of Intercontinental Ballistic Missiles (ICBMs) with nuclear warheads. Procedure was to inform NORAD, who would then call for the launch of US missiles and assure mutual nuclear annihilation. The officer at NORAD hesitated and thought about the situation. He remembered seeing a news report that Nikita Khrushchev was to give a speech in New York as head of the USSR's United Nations delegation. He ordered the men to go outside and see what was happening. The RADAR equipment was still fairly new technology and had mistakenly interpreted a moonrise over Norway as a large-scale Soviet missile launch. Other methods of confirmation were ordered, and NORAD cancelled the alarm.

     The second incident happened on the 26th of September 1983. The Soviet Union's nuclear early-warning system reported the launch of multiple US ICBMs from bases in the US. Stanislav Yevgrafovich Petrov, an officer of the Soviet Air Defense Forces, judged the missile attack warnings to be false alarms. He made the very human decision to ignore them and chose not to report them to his command. This decision may well have saved all life on earth. The system he was working with told him it was time to initiate full-scale nuclear holocaust. Later investigation of the satellite warning system confirmed that it had malfunctioned.

     But what about A.I.? It can only make decisions based on the data it receives and the parameters that are set for it. If a fully autonomous A.I., with no human decision making, were in charge of both of these situations, would we still be here?