Discussing the Ethics of AI: Day-to-Day Life

There are so many things we do in our day-to-day lives that use AI. Now, I'm not saying AI is a problem; though it could be, that's a discussion for another day. Instead, this blog will dive into the ethical issues raised by specific AI-powered tools we use every day. We will touch on the topics of credit, responsibility, privacy, and bias.

ChatGPT

ChatGPT is an AI chatbot capable of human-like conversation. It can answer questions, refer to websites, and much more. This kind of invention is life-changing, but it also raises concerns.

GENERAL USERS

Some ethical concerns for general users come from privacy, since ChatGPT requires personally identifiable information (PII), such as account details, to provide its service, and user conversations can contain personal information as well. No one wants their personal information out in the open; even the thought of having my personal information out there scares me. Users may not know who can access this information, how it is used, and so on. A breach of that privacy is a breach of trust, and that is what makes this an ethical issue.

TEACHERS & STUDENTS

These days, students are notorious for using ChatGPT on their essays, letters, homework, and more, to the point where it's fair to ask whether this AI revolution is benefiting us. Plagiarizing is obviously wrong. But is it plagiarism if you give the AI your inputs and it tailors a response for you? That is what makes this confusing. AI is becoming a massive part of our lives, and at some point we need to learn to work with it rather than fight it. So letting students use ChatGPT shouldn't be a problem in itself; the debate is whether it takes away students' ability to be creative, unique, and intuitive.

Interested in our online AI coding program for middle & high school students? Enter your email below for program enrollment, updates & more!


Self-Driving Cars

Self-driving cars are largely self-explanatory: they are cars designed to drive safely by themselves. They need little or no human input while driving and manage the road on their own through sensors and cameras.

One of the most significant conflicts about self-driving cars is whether they cause a diffusion of responsibility. With self-driving vehicles, collisions can be prevented in a split second. The ability to have a vehicle that prioritizes your safety and gets you to your destination on time is revolutionary. But this idea has also left many of us unintentionally willing to take more risks. Because we know our car has our back, we may take chances like increasing our speed, turning on autopilot while reaching for something in the back seat, looking in the mirror while on the freeway, and so on. These are risks we wouldn't even consider if we knew that our actions alone determined the consequences. But now there is another factor involved, and that creates an imbalance in how we weigh responsibility. We are more willing to put people's lives in danger because we trust AI so much. But what if the AI were to make a mistake? A miscalculation?

That's where the other ethical issue comes in: who is responsible for AI's mistakes? Say our self-driving car misreads a traffic light as another object and collides with a vehicle in the intersection. Is that the driver's fault for not taking control? Is it the programmer's fault for not testing on diverse enough samples? Whose fault is it? If we can't pinpoint who is to blame, this ambiguity can eventually disrupt our society.

Alexa

Alexa is a voice-activated assistant that can do just about anything. But because it is voice activated, there is naturally some concern about how much the device can hear.

Alexa raises significant ethical concerns regarding privacy. Because Alexa is voice activated, it is always listening for its wake word, "Alexa." Researchers have found that recordings of our conversations are also used to help train the system. Now, I'm not going to lie; this is pretty smart. It gives Alexa more diverse training data and helps it function effectively. But it isn't fair to the customers. If the device were somehow hacked, an attacker could easily obtain very personal information. And since this device is used worldwide, that is highly concerning.

Another reason users believe Alexa uses our indirect input is targeted ads. For example, if one day you talk to your family about shoes in front of Alexa without ever saying its name, you may soon start to see shoe ads on your other devices. This is unethical and upsetting. Alexa is a great tool for helping people stay organized, but using unauthorized input to shape how a person perceives things is unjustifiable. Users may want an item but choose not to buy it because they are trying to be responsible with their money. If Alexa constantly shows them ads for that item, it can interfere with their original decision and wear down their self-control.

Bias

Another big concern is bias in AI. Bias happens when a program is trained on a dataset that isn't diverse, so it ends up treating specific groups of people unfairly. For example, if a skin cancer detection program fails to detect cancer on darker skin, it may be because the training data didn't include enough images of darker skin tones for the computer to learn from. Unfortunately, biases like these appear regularly in AI. This is an ethical concern because it promotes unequal treatment and isn't fair to specific groups.
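To make the dataset-diversity point concrete, here is a minimal sketch of how someone might audit a training set for under-represented groups before training a model. The record fields, group labels, and 10% threshold are all hypothetical illustrations, not part of any real skin cancer system:

```python
from collections import Counter

# Hypothetical training records: each one notes the skin-tone group
# (Fitzpatrick scale I-VI) of the image it came from.
training_records = [
    {"image": "img_001.png", "skin_tone": "I"},
    {"image": "img_002.png", "skin_tone": "II"},
    {"image": "img_003.png", "skin_tone": "II"},
    {"image": "img_004.png", "skin_tone": "V"},
    # ... a real dataset would have thousands more records
]

def audit_group_balance(records, group_key="skin_tone", min_share=0.10):
    """Print each group's share of the dataset and flag under-represented ones."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    for group, count in sorted(counts.items()):
        share = count / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{group_key}={group}: {count} images ({share:.1%}){flag}")

audit_group_balance(training_records)
```

A simple check like this doesn't fix bias on its own, but it makes gaps in the data visible before the model quietly learns them.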

Conclusion

At the end of the day, AI will continue to advance. However, as it requires more of our data to operate efficiently, we must become more aware of our privacy. There are numerous ethical concerns surrounding AI today; in this blog, we only tackled a few of them. But even from these, it is evident that this problem won't have a clear ending. AI will continue to trade away some of our privacy and human-gifted creativity in exchange for easy, efficient processes. And that is okay for some people, while not so much for others.


About Inspirit AI

AI Scholars Live Online is a 10-session (25-hour) program that exposes high school students to fundamental AI concepts and guides them to build a socially impactful project. Taught by our team of graduate students from Stanford, MIT, and more, students receive a personalized learning experience in small groups with a student-teacher ratio of 5:1.

By Anushka Kolli, Inspirit AI Ambassador
