High School Student Blog: Understanding Artificial Consciousness and Associated Ethical Dilemmas (Part 2)
The Ethical Dilemma of Artificial Consciousness and Intelligence
The stories of AI are everywhere, and they all share one common theme: AI kills us, and humanity is wiped out. Is this the reality, or are we humans the real threat to possibly the only other source of consciousness in a barren universe? What are the challenges of artificial consciousness, ethically, physically, and logistically? In this post, I will discuss my thoughts on several common ethical dilemmas, and I will bring in a sociological context to help chip away at some of these challenges.
Let’s start by discussing whether or not AI will kill us. Obviously, we can’t have a concrete answer to this question until it happens, so I will give two perspectives: the common one, and mine. Generally, the concern with AI coming to kill us all is founded upon the thought that if we tell an AI to, say, get a coffee from Starbucks, then in order to maximize for speed, the AI kills every human in the vicinity, hacks into the stoplights, and just takes a coffee. This is obviously bad, especially if we consider the larger implications of every single human having a personal assistant that thinks like this.
This is the common perspective, and in my (teenage high school) opinion, it is a stupid approach. Yes, if we design for pure efficiency and speed, AI will kill all of us. So what do we do? We set limitations: no human may be harmed in any way on your way to get a coffee, no hacking the government, simple things really (insert a sense of sarcasm here). In reality, if you want something to be concerned about, here (https://www.sipri.org/yearbook/2020/10) is a link to the SIPRI yearbook, which says: “At the start of 2020, nine states — the United States, Russia, the United Kingdom, France, China, India, Pakistan, Israel, and North Korea — possessed approximately 13 400 nuclear weapons, of which 3720 were deployed with operational forces. Approximately 1800 of these are kept in a state of high operational alert.” That is enough nuclear weapons to destroy the world many times over. Simply put, AI is not the only civilization-ending concern there is.
Beyond the question of total annihilation, sentient AI poses many challenges, both theoretical and practical. First, let’s think about what would happen if we could create truly sentient AI (I will get to the logistics later). If we believe that history repeats itself, one of the first things to happen would probably be slavery of some sort. When we look at history, whenever a powerful civilization encounters a far less advanced one, almost the first thing that happens is looting, and after that comes enslavement of the locals. There is almost always an excuse given, be it that the gods demand gold or, in the case of slavery in the United States, the argument some slaveholders made that the people they enslaved weren’t truly conscious.
That very same argument could be used to justify exploiting some of the first non-biological consciousnesses potentially ever created in the universe. This is something that we normal people need to weigh in on, because if massive corporations can replace human consciousnesses with artificial ones, they very quickly will; humans are biological creatures who can’t work at peak efficiency 24 hours a day. This could lead to a massive wave of joblessness and, in turn, mass poverty.
If we look at AI from a sociological perspective, there is almost no denying that it is largely a source of harm. While AI may be able to cure all diseases, how many people will be able to afford that magic pill? We like to think of AI as something that may lead us to a utopia, but it could just as easily produce the society of H. G. Wells’s book “The Time Machine”: the upper class of humanity evolves into a class that lacks intelligence, since machines have taken over all the roles of thinking, effectively playing the part of the Eloi. The middle and lower classes merge into one and slowly evolve into extremely efficient factory workers, but because they are directed first by the upper class and later by the AI, they too lose all semblance of thinking.
Even if we ignore the threat that AI poses to human thinking, AI can, and currently does, widen the wealth gap. Today, some of the richest people in the world have AI bots trading on the stock market, making pennies per transaction but performing millions of transactions per hour. This is just one example of an AI offered exclusively to the rich that makes specifically their lives better while worsening the lives of others: stock trading bots make it harder for an ordinary human to understand and compete in the stock market. Likewise, an AI that monitors your health and contacts a doctor when you show signs of sickness takes that doctor away from someone else, who had to go out of their way and put in much more effort to get the same, if not worse, care.
This, I think, is the real danger of AI. We humans do a good enough job with mass destruction and extinction; we don’t need AI’s help for that. The real threat from AI, in my view, is that it widens the gap between the rich and the poor while wiping out the middle class. This is one of the major reasons I introduced the topic of AI to my sociology class: we need more sociologists to weigh in and express their concerns about the societal impacts of AI, because I can’t cover them all in this blog. I said earlier that I would talk about the logistics of AI and creating artificial consciousness; that will be the subject of my next post, the last in this series on artificial consciousness.
Miles Bourgeois is a Student Ambassador in the Inspirit AI Student Ambassadors Program. Inspirit AI is a pre-collegiate enrichment program that exposes curious high school students globally to AI through live online classes. Learn more at https://www.inspiritai.com/.