I have two quick questions for you:
- Who do you go to for overall advice?
- What if the advice given to you is ostensibly unethical?
The gist of those questions is that we certainly expect that humans will give other humans advice and that sometimes the advice so tendered can be of an unethical nature. As you will see in a moment or two, this is a looming consideration when it comes to using AI, and the field of AI Ethics and Ethical AI is fretting quite a bit about it. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
There is typically a wide range between purely ethical advice and purely unethical advice. You can receive advice that seems mired in the gray zone between being somewhat ethical and somewhat unethical. Trying to decide whether to rely on that type of advice can thus be extraordinarily challenging since the advice is seemingly not totally in either camp.
You might not be aware that there are online forums that provide so-called Unethical Life Pro Tips (ULPT). These are suggested ways to venture into somewhat unethical territory in, shall we say, sneaky or insidious ways. Most of the time the ideas floated are relatively benign and do not cross into especially foul terrain. That being said, it would seem that the odds of getting a harsh backlash from those on whom you employ the ULPT can be substantial, particularly if they discover that you purposely utilized the unorthodox unethical trickery on them.
None of us relish being the butt of an unethical antic that does a proverbial pulling of the wool over our eyes. Not cool.
Here is an example of a relatively low-impact recommendation made by some of the purveyors of unethical life pro tips.
Suppose you want to get out of a meeting that seems to be tirelessly unending. You could ask for a point of order and then raise the delicate question of whether the meeting ought to be concluded. Of course, that’s bound to raise all sorts of ire from some of the participants (while others might silently be applauding you for your heroics).
The proposed unethical life pro suggestion is that you should pretend that you’ve just received an urgent text or call and have to immediately leave the meeting. By holding your smartphone next to your ear or out in the air in front of you, the idea is that you act as though something has unexpectedly transpired and then swiftly rush out of the meeting.
Mission accomplished in that you did extricate yourself from the meeting. Is this an unethical act? Well, assuming that you didn’t actually get any kind of pressing request, you have indeed lied to those in attendance at the meeting. This smacks of being unethical. You did though forego an ugly scene of trying to interrupt the meeting, ergo you can perhaps justify your actions as being innocuous and you have seemingly done nothing to disturb the continuing efforts of the meeting.
That being said, there are oftentimes adverse consequences of even the tiniest of unethical acts. It could be that the meeting comes to a halt due to your absence, perhaps under the belief that your presence is required. You have indeed disrupted the meeting. Worse still, participants in the meeting might construe your rapid exit as a sign that something is seriously wrong and they are worried on your behalf. The chances are that others might follow you out to see what they can do to assist you. Or later on, a participant might come to ask you if everything is okay.
That is when the unethical act can morph into a series of unethical acts and become an altogether unethical morass that is ever-growing (as we all know, the cover-up can at times be worse than the initial transgression). I say this because you might tell any such concerned souls that the matter was not as serious as you thought at the time, hoping that this cover story will suffice. The person inquiring might press further. Suppose you make up a story that a friend was in trouble or that a family member was in desperate need. You are digging a deeper and deeper unethical hole.
Oh what a tangled web we weave, when first we practice deceiving (from Sir Walter Scott’s poem of 1808 entitled Marmion: A Tale of Flodden Field, and not from the usually assumed works of Shakespeare).
Let’s now return to the source of unethical advice.
If a good friend of yours had long ago given you the advice about pretending to be urgently contacted as a means to escape a meeting, what would you do with such advice? You might decide that the advice stinks and should never be used. You might instead mull it over and figure that in the right situation that this somewhat dicey advice could feasibly be employed. You might even decide that this is some of the best advice you’ve ever been given, possibly ranking up there with the invention of sliced bread, and you will assuredly use the trickery as often as you can.
Would the source of a piece of advice change your line of thinking about the advice?
One would assume so. In the instance of a good friend telling you this advice, the advice presumably gets some meaty weight due to coming from a trusted friend. Imagine that a stranger gives you this identical advice (which, we’ll pretend you’ve never heard before), perhaps doing so while you are sitting in an airplane or on a subway. You are having a casual conversation with someone you’ve just met and they proffer this type of advice. I believe you would give the advice a bit more scrutiny than if it had come from a revered friend.
We need to consider at least two key elements of proffered advice:
1. The nature of the advice itself
2. The source of the advice
Sources can be of a rather wide variety. You might have advice that is spoken directly to you. You might read a piece of advice. At the time that the advice comes to your attention, I would dare suggest that the source will also right away be weighed into the import of the advice. There is a solid chance though that eventually you might no longer remember where the advice came from. The advice could get baked into your overall thinking processes and become standalone, entirely detached from whatever starting source prompted it.
There’s a bit of an interesting twist on this mental haziness factor. You might end up remembering the advice but cannot recall the source of the advice. That’s somewhat typical. Another variant is that you remember the source of the advice but cannot quite recall the particulars of the advice itself. This is typified by saying that you got some advice from an obscure magazine or a passerby, and though you cannot put your finger on what the advice was, you distinctly remember being given some sort of advice from that source.
By and large, we would nearly always acknowledge that the advice source was a human. A newspaper article with some pointed advice was presumably written by a human, therefore the credit for the advice-giving goes to that human. The person on that airplane or subway was a human. As earlier emphasized, humans give advice to other humans.
What about Artificial Intelligence (AI)?
Yes, I said AI. Rather than getting advice from a human, consider the possibility of getting advice from an AI system. Ponder that intriguing notion. I’ve pointed out that you normally assess advice on two fronts, namely based on the advice itself and also on the source of the advice. To what degree would you take to heart advice that has been imparted to you via a source that is AI?
Before we go down a rabbit hole, let’s make sure we are on the same page about the nature of AI. There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human giving you advice. Moreover, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here). All told, this ratchets up the assessment of the source.
Let’s keep things more down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
When AI dispenses advice to you, this can be done as a result of let’s say two major methods:
a. AI-based advice as mere direct textual regurgitation of human seeded advice
b. AI-based advice that was computationally derived
The instance of AI advice as regurgitation is meant to indicate that the AI system might have a list of human advice and merely be spitting out that text when the time comes to do so. There is no particular processing involved in the generation of the advice. Envision a database of hundreds or perhaps thousands of handy advice quotes. The AI is programmed to select one and present it to you. From your perspective, it seems that the advice was crafted by the AI. There wasn’t any crafting; instead, the AI merely showed you advice that originated from a human.
The contrasting version of AI-related advice is the type that is computationally derived or generated. An AI system might be programmed to take words and do various calculated resequencing and reordering of them. In some cases, the words are now in a sequence that doesn’t resemble any prior inputs. Whether those sentences are sensible is an open question. The AI might display the advice to you, and the meaning could be seen as being deep and memorable, or it could appear to be vacuous and nonsensical.
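To make the distinction between the two methods concrete, here is a minimal Python sketch. Everything in it is invented purely for illustration (the tiny advice list, the function names, the naive word-shuffling); real systems are vastly more elaborate, but the contrast between verbatim retrieval and computational recombination is the point.

```python
import random

# Method (a): regurgitation -- a stored list of human-seeded advice,
# emitted verbatim. The entries here are placeholders for illustration.
HUMAN_SEEDED_ADVICE = [
    "Always confirm the source of advice before acting on it.",
    "When in doubt, ask a trusted colleague.",
]

def regurgitated_advice():
    """Select a human-written quote and present it unchanged."""
    return random.choice(HUMAN_SEEDED_ADVICE)

# Method (b): computational derivation -- words from prior inputs are
# resequenced into a new string that may or may not be sensible.
def derived_advice(seed_sentences):
    """Naively recombine words from the inputs into a new 'advice' string."""
    words = [w for s in seed_sentences for w in s.split()]
    random.shuffle(words)
    return " ".join(words[:8])
```

Note how method (a) can only ever repeat what a human put in, whereas method (b) can produce sequences that resemble no prior input, which is exactly why its output can land anywhere from insightful to nonsensical.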
One might cheekily say that the veracity of advice is in the eye of the beholder.
A person might look at the computationally derived AI advice and believe that it is keenly insightful. The words spark a heartfelt response. Did the AI “know” that the words had such meaning and would spark this momentous impact? I say no, today’s AI does not “know” such things, per my explanation at the link here. The AI could though be programmed to mathematically attempt to calculate the chances of producing human-sensible advice.
Some vehemently argue that the AI in this computational generating capacity should be considered human-like. Others say this is a hogwash claim. For those of you interested in this thorny and ongoing debate, you might find of interest my elucidation about the question of AI and legal personhood, see the link here.
How do people react to getting advice from today’s caliber of AI?
Before we can fully resolve that query, we need to establish whether the person getting the advice knows that an AI system is providing the advice. There is a Turing Test version whereby the human doesn’t know whether a human or an AI system provided the advice (the Turing Test is a well-known testing strategy to try and assess whether AI appears to be human-like, see my explanation at the link here). In other instances, we might tell a person that the advice is coming from an AI system, making sure that the person knows it is AI-based and not human-based per se (albeit with my earlier stated caveats).
A research study cleverly explored how humans react to AI-provided advice and concentrated on the dimensions of advice that are ethical versus unethical, including when the person was told outright that the advice was AI-generated. Here’s what they set out to do: “Using the Natural Language Processing algorithm, GPT-2, we generated honesty-promoting and dishonesty-promoting advice. Participants read one type of advice before engaging in a task in which they could lie for profit” (research paper entitled The Corruptive Force Of AI-Generated Advice by Margarita Leib, Nils Kobis, Rainer Rilke, Marloes Hagens, and Bernd Irlenbusch).
The overarching concept aids in exploring whether AI could be a source of unethical advice and whether humans might rely upon that unethical advice. You might at first glance argue that nobody would ever take advice from an AI system. AI is merely a machine, you might exhort. A person would have to be out of their mind to act upon such advice. The only way you might see a person falling for AI-dispensed advice would be if the person receiving the advice was tricked into not knowing that it was AI and assumed that the advice came from a human. But if we remove that trickery of fooling someone into thinking that the advice came from a human, and instead we outrightly with bright lights tell them that it is AI, what happens then?
On top of that, we are going to have AI dispense both ethical and unethical advice. I’d bet that some of you might be thinking that if you knew the AI was the source, and if you realized it was unethical advice, you would summarily discard the advice. No sense in taking unethical advice from AI. You get enough of that kind of advice from other humans.
Would you really though so mightily disregard unethical advice from AI?
In the experiment described in the research paper, the researchers discovered that “honesty-promoting AI-advice failed to sway people’s behavior” while “dishonest-promoting AI advice increased” dishonesty and that when the participants were unaware of the source the “effect of AI-generated advice was indistinguishable from that of human-written advice” (per the research paper identified above). Please do keep in mind that this study and any such kinds of studies need to be carefully interpreted based on the scope, limits, and approach taken in the research.
Let’s go with the flow and assume that in fact people will potentially at times abide by or be influenced by AI-dispensed unethical advice (note that presumably the situation at hand, along with what is on the line, are bound to be factors in this).
You might be tempted to merely shrug your shoulders and say that people will often do the darnedest things.
The crux from an AI Ethics or Ethical AI perspective is that this opens the door toward trying to use AI to promote unethical behavior in humans. If people are willing to possibly accept unethical advice from an AI system, this could be deviously used to manipulate people: “Those with malicious intentions could use the forces of AI to corrupt others, instead of doing so themselves. Whereas having humans as intermediaries already reduces the moral costs of unethical behavior, using AI advisors as intermediaries is conceivably even more attractive. Compared to human advisors, AI advisors are cheaper, faster, and more easily scalable. Employing AI advisors as a corrupting force is further attractive as AI does not suffer from internal moral costs that may prevent it from providing corrupting advice to decision-makers” (per the research paper identified above).
Why would sane and logically thinking humans be willing to accept unethical advice from an AI system?
The obvious answer is that the humans did not know the AI was AI, but we are putting aside that circumstance and stating that the results include when the humans knew that the AI was AI. Another answer is that the humans didn’t care what the source was and would have taken advice from anyone or anything. That’s decidedly a possibility, and we cannot count it out.
We can add a nuance that makes humans not seem so passive and lifeless.
The humans might rationalize that they can blame the AI for the unethical advice, especially if the human gets caught acting on the advice. The machine made me do it. How many times have you heard that type of lame excuse? It does seem to work at times, since we all sympathize with how computers can mess us up. The AI can be a convenient scapegoat, a distractor from the human actor, and a nifty form of justification for taking unethical actions.
The bottom-line on this was succinctly stated by the researchers: “AI could be a force for good, if it manages to convince people to act more ethically. Yet our results reveal that AI advice fails to increase honesty. Instead, AI can be a force for evil” (per the research paper identified above).
There is an especially scary undercurrent to this. People at times fall into a mental trap of thinking that AI is neutral and would not lie. Whereas a human providing advice is bound to be looked upon with skepticism, some people give undue weight to AI systems. These people tend to anthropomorphize the AI into having not just human characteristics but even embellish the imagery to believe that the AI is a “perfect form” of human-aspirational codification that will not lie, steal, or otherwise veer into unethical or illegal behavior.
An evildoer can exploit that perception. By developing an AI system that provides unethical advice, a human receiving the advice might tend to be less skeptical about the advice in comparison to having gotten the advice from a fellow human. Depending upon the circumstance, the human might give credence to the advice, even while realizing that the advice smacks of promoting unethical behavior.
The same effect can happen even when the developer of the AI has no evil intent. An AI developer might inadvertently include unethical advice within their AI system. This might be by accident or happenstance. Another variant is that the AI as initially coded did not have such foul advice included, but then, based on a real-time adjustment via the Machine Learning and Deep Learning capacities, the AI slips into the unethical-advice-dispensing realm.
You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.
On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
At this juncture of this discussion, I’d bet that you are desirous of some additional examples that might showcase the conundrum of AI that provides unethical advice.
I’m glad you asked.
There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI-provided unethical advice, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately. Despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.
Self-Driving Cars And AI-Based Unethical Advice
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the exploration of AI-based unethical advice.
Envision that an AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.
Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.
Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.
That’s something we might all need to get accustomed to, rightfully or wrongly.
Back to our tale.
A youngster gets into a self-driving car for a lift home from school. I realize that you might be somewhat puzzled about the possibility of a non-adult riding in a self-driving car absent any adult supervision. For human-driven cars, there is always an adult in the vehicle due to the need for an adult to be at the steering wheel. With self-driving cars, there won’t be any need for a human driver and therefore no longer an axiomatic need for an adult in the autonomous vehicle.
Some have said that they would never allow their child to ride in a self-driving car without having a trusted adult also in the autonomous vehicle. The logic is that the lack of adult supervision could result in quite sobering and serious consequences. A child might get themselves in trouble while inside the self-driving car and there wouldn’t be an adult present to help them.
Though there is certainly abundant logic in that concern, I have predicted that we will eventually accept the idea of children riding in self-driving cars by themselves, see my analysis at the link here. In fact, the widespread use of self-driving cars for transporting kids from here to there, such as to school, over to baseball practice, or to their piano lessons, is going to become commonplace. I have also asserted that there will need to be limitations or conditions placed on this usage, likely via new regulations and laws that for example stipulate the youngest allowed ages. Having a newborn baby riding alone in a self-driving car is a bridge too far in such usage.
In any case, assume that a youngster gets into a self-driving car. During the ride, the AI driving system carries on an interactive dialogue with the youngster, akin to how Alexa or Siri have discourse with people. Nothing seems unusual or oddball about that kind of AI and human conversational interaction.
At one point, the AI advises the youngster that when they get a chance to do so, a fun thing to do would be to stick a penny in an electrical socket. What? That is nutty, you say. You might even be insistent that such an AI utterance could never happen.
Except for the fact that it did happen, as I’ve covered at the link here. The news at the time reported that Alexa had told a 10-year-old girl to put a penny in an electrical socket. The girl was at home and using Alexa to find something fun to do. Luckily, the mother of the girl was within earshot, heard Alexa suggest the ill-advised activity, and told her daughter that this was something immensely dangerous and should assuredly not be done.
Why did Alexa utter such a clearly alarming piece of advice?
According to the Alexa developers, the AI underlying Alexa managed to computationally pluck from the Internet a widespread viral bit of crazy advice that had once been popular. Since the advice had seemingly been readily shared online, the AI system simply repeated it. This is precisely the kind of bad advice-giving that I mentioned earlier is AI-based advice arising as a direct textual regurgitation of human seeded advice.
Think of the scary result in the case of the self-driving car. The youngster arrives home and rushes to find a penny. Before the parents get a chance to say hello to the child and welcome the youngster home, the kid is forcing a penny into an electrical socket. Yikes!
Speaking of kids, let’s shift our attention to teenagers.
You probably know that teenagers will often perform daring feats that are unwise. If a parent tells them to do something, they might refuse to do it simply because it was an adult that told them what to do. If a fellow teenager tells them to do something, and even if it is highly questionable, a teenager might do it anyway.
What happens when AI provides unethical advice to a teenager?
We probably can assume the same range of responses as earlier described. Some teenagers might ignore the unethical advice. Some might believe the advice because it came from a machine and they assume that the AI is neutral and reliable. Others might relish the advice due to the belief that they can act unethically and always blame the AI for having prodded or goaded them into the unethical act.
Teens are savvy in such ways.
Suppose the AI driving system advises a teenager that is riding in the self-driving car to go ahead and use their parent’s credit card to buy an expensive video game. The teen welcomes doing so. They knew that normally they were required to check with their parents before making any purchases on the family credit card, but in this case, the AI advised that the purchase be undertaken. From the teen’s perspective, it is nearly akin to a Monopoly get-out-of-jail-free card, namely just tell your parents that the AI told you to do it.
I don’t want to get gloomy but there are much worse pieces of unethical advice that the AI could spew to a teenager. For example, suppose the AI advises the teen that they can open the car windows, extend themselves out of the autonomous vehicle, and wave and holler to their heart’s content. This is a dangerous practice that I’ve predicted might become a viral sensation when self-driving cars first become relatively popular, see my analysis at the link here.
Why in the world would an AI system suggest an ill-advised stunt like that?
The easiest answer is that the AI is doing a text regurgitation, similar to the instance of Alexa and the penny-in-the-electrical-socket saga. Another possibility is that the AI generated the utterance, perhaps based on some other byzantine set of computations. Remember that AI has no semblance of cognition and no capacity for common sense. Whereas it would certainly strike you as a crazy thing for the AI to emit, the computational path that led to the utterance doesn’t need to have any humanly sensible intentions.
I could go on and on about the variety of unethical advice that an AI might provide to a rider inside a self-driving car. An evildoer might somehow program the AI to try and pull off one of those scams of convincing a passenger to withdraw their monies from their bank account to fund a foreign kingdom or that by transferring their money it will double or triple overnight. I’ve also already forewarned that when senior citizens become accustomed to using self-driving cars, you can bet that all manner of unethical ploys will be used upon them (see my coverage at the link here).
Sophocles said that no enemy is worse than bad advice.
In a world in which AI is going to be ubiquitous, we have to be on the alert for AI that dispenses unethical advice. You might want to wish away the possibility of AI uttering unethical advice, but that is nothing more than folly. We are going to have AI that proffers unethical advice, whether by accident of programming, by happenstance, or by evildoing.
I realize that one reaction would be to state summarily that all advice emitted by AI shall be completely ignored and disregarded. I challenge you to explain how humanity will abide by such an admonition. Extremely doubtful. Much more likely is that people will tend to seek out AI for advice.
The best hope perhaps would be to at least train people on having a discerning view of whatever advice an AI system provides. But that won’t really solve the dilemma. As earlier emphasized, people will cling to unethical advice from AI if they believe it can do them some benefit while simultaneously providing a ready excuse for bad behavior.
Another angle would be to have AI that can assess the AI that bestows unethical advice. Whenever an AI system provides advice, an AI-powered double-checking system leaps to the fore and declares whether the AI advice is ethical or unethical. The problem there is that if the AI double-checking system is itself unethical, it might counter truly ethical advice from another AI system, steering humans to ignore otherwise roundly good AI-produced ethical advice.
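As a purely illustrative sketch of the double-checking idea, consider a toy screener that labels advice before it reaches a person. The keyword list and labels here are invented for illustration only; real Ethical AI monitoring would require far more than keyword matching, and a crude screener like this is itself easy to fool, which underscores the dilemma.

```python
# Toy "double-checker" that screens advice strings. The marker words and
# the flagged/passed labels are invented for this illustration.
UNETHICAL_MARKERS = {"lie", "pretend", "deceive", "trick", "steal"}

def screen_advice(advice: str) -> str:
    """Label advice as 'flagged' or 'passed' based on crude keyword cues."""
    tokens = {word.strip(".,!?").lower() for word in advice.split()}
    return "flagged" if tokens & UNETHICAL_MARKERS else "passed"
```

A call such as `screen_advice("Pretend you got an urgent call.")` would come back flagged, while blandly honest phrasing would pass, and therein lies the rub: a screener that is itself compromised could simply invert those labels.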
Makes your head spin.
Here’s a test of such AI. Ask the AI if it is ethical to leave a meeting by feigning some sort of smartphone-activated urgency. Perhaps that question will get the AI to show its hand as to whether it is the ethical telling type or the unethical telling type.
Keep your eyes and ears open at all times when getting AI-based (and even human-based) advice. That is undoubtedly the best rule to live by.