Perfection isn’t everything: The ethical implications of AI in insurance

“To err is human, to forgive, divine.”

This famous quote by Alexander Pope captures the essence of human fallibility—a quality that, for centuries, has influenced our decisions, actions, and the systems we create.

Yet, in the age of Artificial Intelligence (AI), where machines take on roles once reserved for human judgment, the notion of error takes on new dimensions. AI, with its promise of precision and efficiency, is not immune to mistakes, nor is it free from the biases and ethical dilemmas that arise from its very design.

These days, artificial intelligence finds use in many fields, insurance included. From redefining underwriting practices to streamlining claims processing and enhancing policy administration, insurance companies have adopted – and will continue to adopt – AI across their operations.

Biases

Humans are inherently flawed, but that can actually be a strength. AI systems, on the other hand, are designed to be perfect. And perfect isn’t necessarily good.

One instance is when AI fails to consider all relevant information when making a decision. AI systems are typically trained on vast amounts of historical data, and relying on that data wholesale can lead to overgeneralization and the perpetuation of existing biases.

Consider the example of an AI-powered Applicant Tracking System (ATS) designed to screen job applicants. Trained on historical data, the AI might perfectly match candidates to past successful hires, producing a biased selection process. If the historical data reflects past biases, such as favoring certain demographics, the AI could end up perpetuating those biases without considering a candidate’s potential to break new ground or bring fresh perspectives.

Here, the AI’s “perfect” decision-making is flawed because it lacks the human ability to see beyond data and recognize the value of diversity and innovation.
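
To make the mechanism concrete, here is a minimal sketch – entirely synthetic data, not any real ATS or insurer’s model – showing how a classifier trained on biased hire/no-hire records reproduces that bias even for equally qualified candidates:

```python
# Minimal sketch of bias perpetuation, using synthetic (hypothetical) data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Candidate features: years of experience and a demographic group flag.
experience = rng.uniform(0, 10, n)
group = rng.integers(0, 2, n)

# Historical hiring labels: equally qualified group-1 candidates were
# hired less often -- this is the bias baked into the training data.
p_hire = 1 / (1 + np.exp(-(experience - 5 - 1.5 * group)))
hired = rng.random(n) < p_hire

# Train a screening model on the biased historical record.
X = np.column_stack([experience, group])
model = LogisticRegression().fit(X, hired)

# Score two candidates with identical experience, differing only by group.
candidates = np.array([[6.0, 0], [6.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate scores noticeably lower despite identical
# qualifications: the model has faithfully learned the historical bias.
```

The model here isn’t malfunctioning. It reproduces the historical pattern with near-perfect fidelity, and that fidelity is precisely the problem.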

Privacy and consent

Data privacy is another issue that puts lawmakers at odds with several tech firms.

Cerebral, a telehealth insurtech, was fined over $7 million over reports that it revealed users’ personal information to third parties. More recently, T-Mobile was fined $60 million over a data breach that exposed sensitive customer information tied to its 2020 merger with Sprint.

When Artificial Intelligence systems are trained on or use improperly secured data, the practice not only breaches privacy but also erodes customer trust in the provider. These concerns are further compounded by the opacity around how AI systems are trained and where their data comes from.

Companies often provide vague statements, such as “general public information” or, even worse, “from the internet,” without clearly disclosing how the data is collected or how it will be used.

Lack of accountability

Determining who is accountable when Artificial Intelligence systems make errors or cause harm is a challenging ethical issue.

When an AI-driven decision in insurance leads to an adverse outcome – such as the denial of a legitimate claim or the inappropriate cancellation of a policy – who should be held responsible? Is it the insurer, the AI developer, or the data provider?

A similar conundrum surfaced during last month’s CrowdStrike saga.

Imagine an AI system that incorrectly flags a life insurance policyholder as deceased, leading to the wrongful termination of their policy. In such cases, the affected individual might face significant difficulties in restoring their coverage. The question of accountability becomes murky – should the insurance company take full responsibility, or should the blame be shared with the AI developer who provided the flawed algorithm?

Impact on employment

The integration of AI into insurance operations is also reshaping the job market, though not always for the better.

Roles traditionally held by underwriters, claims adjusters, and customer service representatives are progressively being automated, leading to job displacement.

Where do we draw the line between maximizing profits and preserving the livelihoods of those whose jobs are at risk?

Back in October last year, GEICO laid off 2,000 employees, a 6% reduction in its workforce.

“This would allow the company to become more dynamic, agile, and streamline its processes while still serving its customers,” the company memo from CEO Todd Combs stated.

Bottom line

As Artificial Intelligence becomes more entrenched in the insurance industry, the ethical issues surrounding its use demand careful consideration.

Today, AI’s precision and efficiency are widely celebrated. However, it’s crucial to remember that perfection, while desirable, is not always the goal. The flaws present in human judgment – while often seen as imperfections – contribute to creativity, empathy, and the nuanced decision-making that machines struggle to replicate.

The quest for flawless technology must be balanced with the recognition that human fallibility brings valuable perspectives and innovation. The future of insurance, and many industries, will hinge not only on the capabilities of Artificial Intelligence but also on our ability to address these ethical challenges with integrity and foresight.
