Real-Life Ethical Issues of Artificial Intelligence

Introduction to Ethical Issues of AI

Many tech giants and researchers are focusing on the ethics of technology and AI. But why? Is it really necessary? Will machines harm humanity if AI ethics are ignored?

To answer these questions and examine their real-world consequences, we will walk through some real-life cases of ethical issues. We will go industry by industry so that the relevant stakeholders can relate to each case and apply the required ethics when implementing technology or AI.

AI ethics is essential for overcoming these challenges, fears, and ethical risks.

AI Ethics in Healthcare Industry

Artificial Intelligence improves healthcare processes such as medical imaging, diagnosis assistance, and prognosis, and it recommends treatments for human wellbeing. But these applications raise serious ethical issues such as bias and data security. Because of bias, algorithms can produce unfair results. Let’s discuss a case for better understanding.

Case Study for Healthcare Industry

U.S. health providers use Artificial Intelligence algorithms to guide health decisions, such as which patients need extra care or medical services. Researchers at UC Berkeley (Obermeyer et al.) identified signs of racial bias in one such algorithm: at the same assigned level of risk, Black patients were considerably sicker than white patients. White patients were given higher risk scores and were therefore more likely to be selected for extra care.

This bias reduced the number of Black patients identified for extra care by more than half.

The primary reason is that the algorithm uses healthcare cost, rather than illness, as a proxy for health needs. Less money is spent on Black patients than on white patients with the same level of need, so the algorithm falsely concludes that Black patients are healthier than equally sick white patients.
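The proxy problem described above can be illustrated with a tiny sketch. All names and numbers here are invented for illustration; the point is only that ranking patients by past spending, instead of by actual illness, drops the lower-spending group from the "extra care" list even when its members are just as sick.

```python
# Toy illustration (hypothetical data): using healthcare cost as a proxy
# for health need under-ranks a group that spends less for the same
# level of illness.

patients = [
    # (group, illness_severity, annual_cost_usd)
    ("white", 7, 9000),
    ("white", 4, 5000),
    ("black", 7, 4500),  # same severity as the first patient, lower spending
    ("black", 4, 2500),
]

# A cost-based "risk score" ranks patients by past spending.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
# A need-based score would rank by actual illness severity instead.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

top2_cost = {p[0] for p in by_cost[:2]}
print(top2_cost)  # → {'white'}: only white patients selected for extra care
```

Under the need-based ranking, the top two would instead include the equally sick Black patient, which is exactly the correction the researchers proposed.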

AI Ethics in Banking and Finance Industry

AI digitizes banks through automation and also makes them more secure. It can be used for fraud detection, anti-money laundering, and digital payment advisory. All of this boosts productivity, revenue, and profits while reducing costs. But regulators, customers, and experts have raised several concerns. These challenges can be grouped into the following categories:

  • Bias
  • Accountability
  • Transparency

We will discuss a case study that may help understand these ethical issues and how they affect a particular community or group.

Case Study of Banking and Finance

The Apple Card, Apple’s AI-driven credit product, was found to be biased with respect to gender. It offered significantly different interest rates and credit limits to men and women, granting larger credit limits to men.

David Heinemeier Hansson (tech entrepreneur and creator of Ruby on Rails) reported that his wife received a credit limit far lower than his, despite having a more favorable credit score.

With traditional “black box” AI systems, it would be challenging for a bank to analyze and understand where this bias originated.

AI in banking is a collaborative process powered by chatbots and other automation technologies, and machine learning plays a vital role in bringing these techniques to life.

Ethics of AI in Hiring Process

Hiring is a time-consuming and often frustrating process. AI systems can power automated hiring pipelines that organizations use to select the best candidates based on their capabilities. Many organizations are turning to algorithms that rank video-interview candidates according to various features.

But in some recent applications, ethical issues have been found that break the rules of equality: the hiring process turns out to be biased and unfair. Unconscious racism, ageism, and sexism play a big role in hiring, and it is essential to recognize and reduce these biases.

Case Study for Hiring Process

Amazon attempted to support its HR teams with an AI recruiting and hiring tool. It allowed the company to feed in thousands of résumés and have the system select the top five. Every organization wants such a system.

But in 2015, it was noticed that the system was biased against women when rating candidates for software development and other technical positions.

The reason for this bias was the data used to train the system. It was trained on applications from the previous ten years, learning the patterns in the résumés submitted. Most of those résumés came from men, so the model came to reflect the male dominance of the tech industry.
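How a model can absorb this kind of skew from historical data can be sketched with a deliberately naive term-frequency scorer. The résumés below are invented; the mechanism is the point: terms common in the (mostly male) historical hires get high weights, while a term like "women's" is rare in that history and contributes little.

```python
# Toy sketch (invented data): a naive résumé scorer learned from
# historically hired résumés. Because past hires were mostly men,
# terms appearing mainly on women's résumés receive low weight.

from collections import Counter

past_hired_resumes = [
    "java backend captain chess club",
    "python systems captain debate team",
    "c++ compilers java",
    "java python captain chess club",
    "python women's chess club",  # only one historical hire mentions "women's"
]

# Weight each term by how often it appears among past hires.
term_score = Counter(word for r in past_hired_resumes for word in r.split())

def score(resume: str) -> int:
    """Sum the historical weights of a résumé's terms."""
    return sum(term_score[w] for w in resume.split())

print(score("java python captain"))   # high: matches the majority pattern
print(score("java python women's"))   # lower: penalised by historical skew
```

A real recruiting model is far more complex than this, but the failure mode Amazon reported is the same: the training distribution, not any explicit rule, encodes the bias.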

AI Ethics in COMPAS

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is an AI tool used in many jurisdictions around the U.S. It predicts recidivism risk, i.e., the likelihood that an offender will reoffend. It produces a score from 1 (lowest risk) to 10 (highest risk) and groups defendants into three categories: high, medium, or low risk of recidivism. It takes 137 features as input, such as age, gender, and criminal history.

Defendants classified into the high or medium risk categories (scores 5–10) are more likely to be held in prison while awaiting trial than those classified as low risk (scores 1–4).
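The score-to-category mapping described above can be written out explicitly. The internal cutoff between medium and high (7 vs. 8) is an assumption based on commonly reported decile groupings, not taken from COMPAS documentation; the source text only specifies 1–4 as low and 5–10 as medium/high.

```python
# Sketch of the COMPAS score-to-category mapping described in the text.
# The medium/high boundary (<= 7) is an assumed, commonly reported cutoff.

def risk_category(decile_score: int) -> str:
    if not 1 <= decile_score <= 10:
        raise ValueError("COMPAS decile scores run from 1 to 10")
    if decile_score <= 4:
        return "low"       # scores 1-4: low risk of recidivism
    if decile_score <= 7:
        return "medium"    # assumed cutoff for the medium band
    return "high"

print(risk_category(3))   # → low
print(risk_category(9))   # → high
```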

According to ProPublica, an investigative journalism organization, the system is biased: it discriminates against people based on race, failing particularly in the case of Black defendants.

Black offenders were almost twice as likely as white offenders to be rated higher risk yet not reoffend. The opposite held for white offenders: they were more likely than Black offenders to be classified as lower risk, despite criminal histories indicating a higher chance of reoffending.

Yet race is not an explicit feature considered by the model.


As the case studies show, it is essential to check a system’s fairness; otherwise, it may perpetuate social inequalities that ought to be eliminated. Appropriate actions and regulations are needed to overcome these ethical challenges. One significant component of ethical AI, which helps build ethics into AI systems, is Explainable AI.

Explainable AI brings transparency to AI systems: a black-box deep learning algorithm, once explained, effectively becomes a white-box model. A wrong decision can have disastrous effects; the adoption of deep learning is slow partly because organizations often cannot confidently explain how their AI systems reach decisions.
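One common explainability technique that fits this black-box-to-white-box idea is a global surrogate: fit a small, human-readable model to mimic the black box's predictions, then inspect the surrogate's rules. This is a minimal sketch on synthetic data using scikit-learn, not a description of any specific system from the case studies.

```python
# Minimal "global surrogate" sketch: approximate a black-box model with a
# shallow decision tree whose rules a human can read. Data is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # three anonymous features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hidden ground-truth rule

# The "black box": an ensemble whose internals are hard to read directly.
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels,
# so it explains what the black box does rather than what the data says.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
agreement = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with the black box on {agreement:.0%} of samples")
```

The printed tree exposes which features drive the black box's decisions; the agreement figure indicates how faithful that simplified explanation is.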

Ethical issues in a system can only be recognized if its data and algorithms are fully understood. Explainable AI is therefore a core component of ethical AI.