Morals and Artificial Intelligence in Health Insurance: High-tech Discrimination Arising from Predictive Analytics

Abstract

During the past decade, the use of artificial intelligence (AI) in the health insurance industry has been on the rise, spurred by advances in computational technologies. The use of AI in health insurance is believed to offer potentially large savings to private health insurers as the costs of claims management fall. Predictive analysis in health insurance is the process by which models built from population health databases are used to assign risk scores to individuals. These risk profiles help target preventive care to individuals who are at risk of certain types of illnesses. Although these technologies increase the potential to monetize big data in health care, there are growing concerns about the morals of a machine-optimized market. In this paper, I present moral and policy considerations of using AI-derived predictive risk scores to set health insurance premiums and coverage. Two questions frame the subject matter of this paper: (a) Is it morally justified to release vast amounts of health data to private companies in order to build reliable models, despite numerous cases of data breaches? and (b) What are the moral problems associated with assigning health risk scores through predictive analysis models? Based on the analysis that emerged, I advance two policy recommendations: comparative studies that produce unbiased evaluations of models for the good of society, and big-data nudging with the help of AI models to reduce moral hazards arising from frivolous misuse of health insurance.
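To make the practice under discussion concrete, the sketch below illustrates, under simplifying assumptions, the kind of predictive risk scoring the abstract describes: a model is fit on a population health dataset and then used to assign a risk score to an individual, which an insurer might map to a premium tier. The features, thresholds, and data are hypothetical and synthetic; this is not the workflow of any particular insurer.

```python
# Minimal illustrative sketch of predictive risk scoring (hypothetical features and data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "population health database": age, BMI, smoker flag.
n = 1000
X = np.column_stack([
    rng.integers(18, 80, n),   # age (years)
    rng.normal(27, 5, n),      # body-mass index
    rng.integers(0, 2, n),     # smoker (0/1)
])

# Synthetic outcome: probability of a costly claim rises with each feature.
logits = -8.0 + 0.05 * X[:, 0] + 0.1 * X[:, 1] + 1.0 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Fit the predictive model on the population data.
model = LogisticRegression().fit(X, y)

# Assign a risk score to an individual applicant (hypothetical profile).
applicant = np.array([[55, 31.0, 1]])
risk_score = model.predict_proba(applicant)[0, 1]
print(f"Predicted risk score: {risk_score:.2f}")

# An insurer could then tier premiums or coverage by score --
# exactly the practice whose moral implications the paper examines.
tier = "high-risk" if risk_score > 0.5 else "standard"
print(f"Assigned tier: {tier}")
```

Because the score flows directly into pricing and coverage decisions, any bias or error in the underlying population data propagates into discriminatory outcomes, which is the moral concern the paper takes up.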

Presenters

Dhanya Gopal

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

2019 Special Focus - The World 4.0: Convergence of Knowledges and Machines

KEYWORDS

Ethics, Predictive Analysis, Health Insurance
