Health insurers increasingly use artificial intelligence (AI) algorithms to decide which medical treatments and services they will cover, particularly in prior authorization, where AI evaluates whether a treatment is “medically necessary.” Insurers claim AI enables faster, safer, and more cost-effective decisions and can reduce unnecessary care. However, there are significant concerns about transparency, fairness, and patient impact.
A key issue is the opacity of these AI systems: insurers do not disclose how their algorithms work, making it difficult for patients and providers to understand or challenge decisions. When coverage is denied, patients face limited options: appeal (a process rarely pursued because of its complexity and cost), accept an alternative treatment, or pay out of pocket, which is often unaffordable. Evidence suggests that AI-driven denials can delay or block access to necessary care, and insurers can benefit financially when an appeal outlasts a patient’s prognosis: in other words, when the patient dies before the appeal is resolved.
There are also equity concerns: people with chronic illnesses, racial and ethnic minorities, and LGBTQ+ individuals are disproportionately affected by denials, and the use of AI may worsen these disparities. Regulators and lawmakers are beginning to address these issues, but algorithmic opacity and the lack of accountability remain pressing problems for patient rights and health equity.
You can read a more detailed story here.
This is not a new concern: read a JAMA article from March 2024.
What can you do? Join the fight for universal health care for everyone. Some helpful links:
Health Care for All Washington
Let me know your thoughts in the comments.