Medical Student, SUNY Downstate Health Sciences University, Brooklyn, NY, US
Introduction: As artificial intelligence (AI) increasingly permeates healthcare, it promises to enhance patient outcomes and operational efficiency. However, the integration of AI also introduces significant risks of perpetuating biases, necessitating careful consideration of fairness in these systems. To mitigate these biases, a comprehensive framework for model development and deployment is essential to ensure equity in healthcare AI applications.
Methods: A systematic approach was proposed, encompassing all stages of the AI lifecycle. This framework includes assessing data representation, validating outcome labels, mitigating feature and transformation bias, evaluating models for accuracy and fairness, and regularly monitoring deployed models for data drift. A focus on fairness metrics, including FPR Parity, FDR Parity, Recall Parity, and FN/GS Parity, was prioritized to align with specific healthcare applications such as screening, diagnosis, and resource allocation.
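As an illustration of the fairness evaluation step, the sketch below computes subgroup false positive rates and an FPR-parity disparity ratio. This is a minimal, hypothetical example: the helper names, the toy data, and the choice of reference group are assumptions for illustration, not the study's actual implementation or data.

```python
# Hypothetical sketch of one fairness check (FPR Parity): compare each
# subgroup's false positive rate to a reference group. A disparity ratio
# near 1.0 suggests parity; large deviations flag potential bias.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over true-negative cases."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_parity(y_true, y_pred, groups, reference):
    """Ratio of each group's FPR to the reference group's FPR."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    ref = rates[reference]
    return {g: (r / ref if ref else float("inf")) for g, r in rates.items()}

# Toy screening data (illustrative only): group "B" is falsely flagged
# more often than reference group "A".
y_true = [0, 0, 1, 0, 0, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
disparities = fpr_parity(y_true, y_pred, groups, reference="A")
```

The same pattern extends to the other metrics named above (FDR Parity, Recall Parity, FN/GS Parity) by substituting the corresponding per-group rate; the appropriate metric depends on the application, e.g. recall parity for screening versus FDR parity for resource allocation.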
Results: The proposed framework demonstrates the importance of iterative model refinement based on fairness metrics to reduce biases and improve equity. Models assessed using fairness metrics provided insights into trade-offs between performance and equity, showing potential for creating AI systems that balance overall accuracy with subgroup-specific performance.
Conclusion: The integration of AI in healthcare has the potential to revolutionize patient care, but it must be approached with a commitment to fairness and equity. By adopting a comprehensive framework for bias mitigation and fairness promotion, healthcare organizations and AI developers can create AI systems that enhance care quality while avoiding the perpetuation of disparities. To fully realize AI's potential in healthcare, organizations must engage with patients and stakeholders, understand their needs and concerns, and prioritize fairness as a core value.