Journal of Ai ML DL

Journal of Ai ML DL | Online ISSN 3070-2143
0
Citations
18.2k
Views
12
Articles
Your new experience awaits. Try the new design now and help us make it even better
Switch to the new experience
Figures and Tables
RESEARCH ARTICLE   (Open Access)

Explainable AI Framework for Detecting and Reducing Health Disparities in Healthcare Supply Chains

Fahad Ahmed 1*, Shaid Hasan 2, Adib Hossain 3, Khandaker Ataur Rahman 2

+ Author Affiliations

Journal of Ai ML DL 2 (1) 1-13 https://doi.org/10.25163/ai.2110685

Submitted: 30 November 2025 Revised: 13 February 2026  Published: 18 February 2026 


Abstract

Yet health disparities in terms of access to healthcare supplies and services continue to be a reality in the healthcare system in spite of improvements in healthcare logistics and optimization. Today’s healthcare supply chains increasingly use machine learning and optimization techniques to forecast demand and allocate supplies; yet again, they can be considered “black boxes” that can perpetuate health disparities in unintended ways. This article presents a framework called Explainable and Fairness-Aware Artificial Intelligence (XAI) in healthcare supply chains. The framework uses fairness-aware machine learning models, such as demand/risk prediction, combined with explainability tools such as SHAP or LIME, that can uncover the underlying causes of allocation inequality. These are then incorporated into an optimization model that balances efficiency goals with constraints on equity, such as ensuring that regions with the highest vulnerability are allocated the most.The proposed method, through its ability to transform the results of explainability models into actionable optimization constraints, extends the traditional bias detection process towards correcting inequities with real-world resource constraints. The method is immediately relevant and applicable to the federal equity and healthcare priorities of the United States, as well as the Healthy People 2030 and the HHS Equity Action Plan, due to its ability to facilitate transparent and accountable decision-making within healthcare logistics settings. This research presents a new and innovative method of unifying AI transparency and supply chain optimization for equitable healthcare delivery.

Keywords: Explainable Artificial Intelligence (XAI), Healthcare Supply Chain Optimization, Algorithmic Fairness, Health Disparities, Fairness-Aware Machine Learning, Equity-Constrained Resource Allocation.

1. Introduction

1.1 Background and Motivation

Supply chains in the healthcare sector significantly influence the health of populations through the management of essential health resources, which include medication, personal protective equipment, vaccines, and diagnostic services, among others. Inequity in healthcare supply chains has been noted, especially in emergency public health crises, where the scarcity of resources disproportionately impacts low-income, rural, and vulnerable populations (Dasgupta et al., 2020; Jean-Jacques & Bauchner, 2021). Inequity in healthcare supply chains is, therefore, not limited to the availability of resources but is also linked to the manner in which decisions regarding the management of the resources are made, with a focus on efficiency but without a conscious thought about the implications of the decisions on the health of the population in the end.

Over the past few years, various healthcare organizations and public agencies have started to employ machine learning (ML) and optimization models to enhance the efficiency of demand forecasting and logistics (Ivanov & Dolgui, 2020). However, the decisions made by the ML models and optimization models are often opaque and may fail to ensure fairness. This may lead to the perpetuation of existing inequities and the underserved populations being allocated fewer resources (Obermeyer et al., 2019).

 1.2 Limitations of Existing Approaches

Similarly, previous studies on the optimization of the healthcare SC had primarily considered cost minimization, service maximization, and SC resilience under disruption (Choi, 2021). Meanwhile, the emerging domain in AI fairness and XAI has shown the capability to audit ML models using techniques like SHAP values and LIME (Ribeiro et al., 2016; Lundberg & Lee, 2017). These two domains lack strong interconnectivity.

In the vast majority of current research on the topic of fairness in healthcare analytics, the analysis only goes so far as to detect bias, make performance comparisons, etc., but provides little insight into how the detected inequalities can be addressed in the system. On the other hand, supply chain optimization models seldom include the concept of fairness in the model, choosing to treat it as an external constraint rather than an objective decision factor.

1.3  Explainable AI for Equity-Aware Supply Chains

Explainable AI provides an opportunity for a powerful tool to address this issue by making the logic of a predictive model transparent to a decision-maker. Feature attribution, for example, can be used to explain how socioeconomic characteristics, geographic distance, or utilization patterns can affect allocation for a particular group (Doshi-Velez & Kim, 2017). XAI can be used in healthcare logistics not only as a diagnostic tool, but also a tool for auditing policies by revealing inequity.

Expanding on this understanding, fairness-aware machine learning approaches such as reweighted loss functions, group-based constraints, and disparity penalties provide a means for prediction models to consider equity-related factors (Mehrabi et al., 2021). While prediction-based fairness is important, it is not sufficient without resource allocation.

1.4  Proposed Framework and Contributions

In this paper, an Explainable and Fairness-Constrained AI framework is proposed, which integrates predictive modeling, interpretability, and optimization to solve the issue of health disparities in the healthcare supply chain. The framework has three integrated components:

fairness-aware ML models for demand and risk prediction;

Use of XAI-based bias diagnosis, which utilizes SHAP and LIME techniques for identifying the causes of unequal outcomes, and  an equity-aware optimization model that takes into account fairness constraints in the allocation process, considering the practical constraints of the actual budget and capacity.

By directly integrating outputs of explanation in optimization constraints, it allows decision-makers to make clear trade-offs between efficiency and fairness. This framework is aligned with federal U.S. health priorities, as established in Healthy People 2030 and the HHS Equity Action Plan, which focus on accountability, assessing health and health care equity using data, and addressing health disparities (U.S. Department of Health and Human Services, 2021).

The novel contribution of this research lies in its methodological integration of XAI and supply chain optimization, which transcends fairness analysis and enters the terrain of operationalized equity enforcement. This research contributes to both the technical discussion of AI-infused logistical research and the policy discussion of equitable healthcare delivery.

2 Literature Review

2.1 Health disparities and healthcare supply chains

It is becoming more widely acknowledged that structural and institutional issues, rather than individual conduct, are the cause of health disparities in access to healthcare services. Previous studies have demonstrated a strong correlation between healthcare delivery disparities and geographic location, socioeconomic status, and infrastructure accessibility, all of which are directly impacted by decisions made in the healthcare supply chain (Braveman et al., 2011; Marmot et al., 2020). Supply chains are a crucial but frequently overlooked factor in determining health equity since they control the timeliness, availability, and dependability of necessary medical supplies.

When shortage occurs, resource allocation methods can worsen disparities, as evidenced by empirical data from public health emergencies. Underserved and high-vulnerability communities frequently receive fewer resources or endure longer delays, even when disease burden is higher, according to studies examining pandemic response logistics (Uscher-Pines et al., 2021; Wrigley-Field et al., 2020). These results imply that when efficiency-driven logistics frameworks are used without taking equity into account, vulnerable populations may be consistently disadvantaged.

2.2 Algorithmic decision-making and bias in healthcare systems

Concerns regarding algorithmic bias and fairness have been highlighted by the increasing use of machine learning in healthcare decision-making. Even when sensitive characteristics like race or wealth are eliminated, algorithms trained on past healthcare data may encode current disparities and produce discriminatory results (Barocas & Selbst, 2016). A widely used healthcare risk prediction algorithm consistently overestimated the demands of Black patients because it relied on healthcare expense as a proxy for sickness severity, according to a groundbreaking study by Obermeyer et al. (2019).

The implications extend beyond healthcare logistics and supply chains, where comparable proxies—such as past utilization, claims volume, or transportation cost—are frequently employed for demand forecasting and prioritization, even though the focus of this work was clinical risk prediction (Zhang et al., 2022). Predictive accuracy can coexist with unfair results when these proxies represent unequal access rather than actual need. This emphasizes how crucial it is to evaluate prediction algorithms that are utilized in healthcare supply chains upstream of allocation decisions.

2.3 Fairness-aware machine learning in healthcare analytics

Concerns with biased decision systems have led to the development of fairness-aware machine learning. Pre-processing strategies (like reweighting or resampling data), in-processing strategies (such fairness-regularized aims), and post-processing modifications applied to model outputs are examples of current methodologies (Mehrabi et al., 2021). These techniques have been used in healthcare settings to reduce group-level performance gaps in risk prediction, diagnosis, and patient prioritizing (Rajkomar et al., 2018; Pfohl et al., 2019).

The literature identifies two enduring limits in spite of these advancements. First, rather than focusing on downstream operational choices, fairness-aware machine learning focuses on predicting equity. Second, policymakers and practitioners find it difficult to evaluate fairness limitations since they are frequently applied in an opaque manner (Corbett-Davies & Goel, 2018). Therefore, in complex supply chain systems controlled by cost, capacity, and logistical restrictions, fairness-aware prediction by itself cannot provide equitable resource allocation.

2.4 Explainable AI for transparency and accountability

By determining the elements influencing specific predictions or the general behavior of the model, explainable AI (XAI) techniques seek to make complicated machine learning models comprehensible. Healthcare analytics has made extensive use of model-agnostic techniques like LIME (Ribeiro et al., 2016) and game-theoretic attribution techniques like SHAP (Lundberg & Lee, 2017). By using these tools, practitioners can find unanticipated dependencies, audit models, and increase confidence in algorithmic judgments (Doshi-Velez & Kim, 2017).

By emphasizing the disproportionate impact of socioeconomic and access-related characteristics on predictions, recent healthcare studies show that XAI can uncover hidden biases (Ghassemi et al., 2021). Nonetheless, explainability is treated as a goal in itself in a large portion of the current XAI literature. The impact of explanations on actual equity results is limited because they are rarely connected to practical modifications in operational systems. This gap is especially noticeable in supply chain applications, as explanations are rarely converted into updated optimization constraints or allocation rules.

2.5 Equity-aware optimization in healthcare resource allocation

Healthcare resource allocation issues have long been the subject of operations research, which emphasizes resilience, efficiency, and service levels. More recently, researchers have started adding equity factors to optimization models, especially when it comes to public health logistics, emergency response, and vaccine distribution (Bertsimas et al., 2020; Daskin & Dean, 2004). To lessen inequities, these models implement equity-based goals or fairness limits, such as proportional allocation guidelines or minimum service standards.

The majority of research set fairness criteria a priori without looking at the predictive mechanisms that produce demand or priority ratings, despite the fact that equity-aware optimization is a major advancement (Zhang & Shah, 2023). Because of this, optimization models may alleviate inequity's symptoms without addressing its computational causes. This restriction highlights the need for tools that unify and transparently combine allocation optimization with predictive fairness diagnostics.

2.6 Research gap and contribution

Three domains—fairness-aware machine learning, explainable AI, and healthcare supply chain optimization—are clearly fragmented, according to the examined literature. XAI reveals bias causes but lacks operational correction methods; equity-aware optimization enforces fairness but frequently lacks diagnostic transparency; and fairness-aware machine learning increases prediction equity but does not guarantee equitable allocation.
An integrated explainable AI framework that use XAI to pinpoint the causes of inequality in prediction models and methodically integrates these discoveries into equity-constrained supply chain optimization is therefore critically lacking. By filling this vacuum, healthcare logistics decision-making becomes open, responsible, and policy-relevant, improving both methodological rigor and practical impact.

3.  Method

3.1 Overview of the Proposed Framework

In order to identify and lessen health inequities in healthcare supply chains, this study suggests an Explainable and Fairness-Aware AI architecture. Three closely related components are integrated into the framework:
(1) explainable AI techniques for bias diagnosis and attribution; (2) equity-constrained optimization for resource allocation; and (3) fairness-aware machine learning for demand and risk prediction.

The methodological design adheres to a pipeline logic in which explainability insights are clearly converted into optimization constraints and predicted outputs influence explainability analysis. This strategy guarantees that equity is taken into account at both the operational decision-making level and the prediction stage (Mehrabi et al., 2021; Zhang & Shah, 2023).

3.2 Data Sources and Preprocessing

The methodology is predicated on the availability of multi-source healthcare and logistics data, such as historical resource demand, delivery volumes, facility capacity, geographic accessibility metrics, and population-level vulnerability indicators, which are frequently utilized in public health supply chain analysis. According to earlier research, preprocessing is necessary because healthcare data frequently reveal structural injustices brought about by unequal access and consumption (Obermeyer et al., 2019).

Normalization across areas, missing-value management, and group-aware resampling or reweighting techniques are among the data pretreatment techniques used to reduce representation bias and stop dominant populations from disproportionately affecting model learning (Rajkomar et al., 2018). According to best practices in algorithmic fairness research, sensitive features are kept for explainability analysis and fairness evaluation rather than being used directly for prediction (Barocas & Selbst, 2016).

3.3 Fairness-Aware Machine Learning for Demand and Risk Prediction

3.3.1 Predictive Modeling

Let X denote the feature space describing regional, demographic, and logistical characteristics, and let yyy represent the target variable (e.g., medical supply demand or risk-adjusted need). A supervised learning model f (X) is trained to predict y, using algorithms suitable for nonlinear and high-dimensional data such as gradient boosting or neural networks (Rajkomar et al., 2018).

3.3.2 Fairness Constraints in Model Training

Constraint-based or regularized objectives are used to integrate fairness-aware learning in order to address group-level inequities. In particular, a fairness penalty term is added to the loss function:

                                                 L = Lpred​ + λL fair​

where Lpred\mathcal{L}_{pred}Lpred​ measures predictive error and Lfair\mathcal{L}_{fair}Lfair​ penalizes disparities across protected or vulnerability-defined groups. This formulation follows established fairness-aware learning paradigms that aim to balance accuracy and equity (Mehrabi et al., 2021; Pfohl et al., 2019). The hyperparameter λ\lambdaλ controls the trade-off between efficiency and fairness.

3.4 Explainable AI for Bias Detection and Attribution

To find the causes of unfair results, explainable AI techniques are used with trained predictive models. There are two complementary approaches used:

3.4.1 SHAP-Based Global and Group-Level Explanations

SHAP values measure each feature's marginal contribution to cooperative game theory-based model predictions (Lundberg & Lee, 2017). While group-conditional SHAP analyses evaluate attribution patterns across vulnerability strata, global SHAP summaries are used to find factors that consistently impact expected demand or priority scores. This makes it possible to identify factors that disproportionately reduce forecasts for people at high risk.

3.4.2 LIME-Based Local Explanations

By using interpretable surrogates to approximate complex models, LIME offers local, instance-level explanations (Ribeiro et al., 2016). In this work, allocation failures at the regional level are analyzed using LIME, which shows how certain characteristics lead to under-allocation in high-vulnerability locations. In order to prevent drawing false conclusions, previous research emphasizes the significance of integrating local and global explanations (Doshi-Velez & Kim, 2017).
Instead of depending only on outcome inequalities, SHAP and LIME work together as algorithmic auditing tools that allow for the transparent detection of structural bias drivers (Ghassemi et al., 2021).

3.5 Equity-Constrained Optimization for Resource Allocation

3.5.1 Optimization Problem Formulation

Predicted demand outputs are incorporated into an optimization model that allocates limited healthcare resources across regions. Let index regions, and let x i​ denote the quantity of resources allocated to region iii. The objective function minimizes total unmet demand and logistical cost:

 iR x ICixiidi-xi +

subject to capacity and budget constraints.

3.5.2 Equity Constraints Informed by XAI

Based on insights from XAI research, fairness limitations are introduced to operationalize equity. For instance, minimum service-level restrictions guarantee that allocations to high-vulnerability areas are based on need rather than past usage:

iα.di  ∀iH

Where 

H denotes the set of high-vulnerability regions and α is an equity threshold. This approach aligns with prior equity-aware optimization frameworks while enhancing transparency by grounding constraints in explainability diagnostics (Bertsimas et al., 2020; Zhang & Shah, 2023).

3.6 Evaluation Metrics

Standard predictive accuracy metrics (e.g., RMSE, MAE) and fairness metrics (e.g., group-wise error disparity and allocation parity gaps) are used to assess model performance. To measure trade-offs between efficiency and equity, allocation results under baseline (efficiency-only) and proposed (equity-aware) models are compared. Previous research highlights the significance of providing both aspects in order to prevent drawing false inferences on advances in fairness (Corbett-Davies & Goel, 2018).

3.7 Experimental Design and Validation

Simulation experiments that compare allocation results across vulnerability strata are used to assess the suggested framework. To evaluate robustness under

Table 1. Healthcare Demand Risk Level and Amplification Weights

Risk Level

Description

Demand Weight wiriskw_i^{risk}wirisk​

Low

Stable demand and low health risk

1.0

Medium

Moderate disease burden

1.3

High

High disease prevalence or surge risk

1.6

Table 2. Summary Statistics of Healthcare Resource Demand Variables

Variable

Mean

SD

Min

Max

Daily PPE demand (units)

1,420

615

180

4,950

Vaccine demand (doses/day)

860

402

95

3,210

Drug replenishment demand (units/day)

1,105

530

140

3,880

Emergency supply requests (units/day)

390

210

45

1,480

Table 3. Social Vulnerability Index (SVI) Tier Classification and Weights

SVI Tier

Description

Equity Weight wiSVIw_i^{SVI}wiSVI​

Low

Low social vulnerability

1.0

Medium

Moderate social vulnerability

1.2

High

High social vulnerability

1.5

Table 4. Healthcare Access Level and Penalty Factors

Access Level

Description

Access Penalty piaccessp_i^{access}piaccess​

High

Dense facility coverage and short travel time

1.0

Medium

Moderate access constraints

1.2

Low

Sparse facilities and long travel distance

1.4

different budget levels and fairness constraints, sensitivity analyses are carried out. Such scenario-based assessment aligns with accepted methods in algorithmic fairness studies and healthcare operations research (Bertsimas et al., 2020; Pfohl et al., 2019).

4 Results

4.1 Descriptive Analysis of Demand and Vulnerability

The results of descriptive statistics show that there is a considerable degree of heterogeneity in terms of healthcare resource demand between social vulnerability strata. As shown in Figure 1, it is evident that there is an increasing trend in terms of the average daily demand for PPE, vaccines, and drugs from low to high SVI regions. The demand levels in high SVI regions are significantly higher, reflecting a higher disease burden, lack of access to preventive care, and a greater burden on public healthcare infrastructure. These results are consistent with existing empirical evidence that links social vulnerability with increased healthcare needs and resource demands (Braveman et al., 2011; Marmot et al., 2020).

The distributional differences shown in Figure 1 validate that there is a non-uniform distribution of demand levels in different regions, and efficiency-based allocation methods may not be able to address the needs of regions with high social vulnerability.

4.2 Performance of Fairness-Aware Demand Prediction

The fairness-aware machine learning model demonstrated excellent predictive accuracy while minimizing group-level disparities. In all regions, the model demonstrated competitive accuracy relative to the unconstrained model, without any significant loss in overall error rates. In group-wise analysis, however, the model demonstrated improvements in predictive fairness, particularly in high-SVI regions where baseline models were prone to under-predicting demand.

In terms of explainability, we applied SHAP analysis to identify the reasons behind the improvements. As shown in Figure 2, the Social Vulnerability Index and associated variables such as average travel time were found to have the most impact on predicting demand, surpassing conventional utilization-based predictors. This result is consistent with previous studies that have shown socioeconomic and access variables are more effective predictors of unmet need compared to historical utilization patterns (Obermeyer et al., 2019; Rajkomar et al., 2018).

In group-wise analysis using SHAP, we found that the fairness-aware model successfully reduced bias in predicting demand in high-SVI regions.

4.3 Allocation Outcomes Under Baseline and Equity-Aware Models

The comparison of the allocation outcomes is shown in Figure 3, which compares the allocation fill rates of the baseline model with the proposed XAI model across different SVI tiers. Although the baseline model performs well for the low-SVI areas, the performance is poor for the high-SVI areas. This shows the gap in the baseline model. On the other hand, the XAI model performs better for the high-SVI areas, showing better equity awareness. The XAI model does not compromise the performance of the low-SVI areas.

This is similar to the findings of other equity-aware optimization models, which have also shown that with minor changes, the disparities can be reduced significantly (Bertsimas et al., 2020; Zhang & Shah, 2023).

4.4 Efficiency–Equity Trade-off Analysis

In order to test the robustness of the model, the analysis was conducted by varying the equity constraints of the model. The efficiency–equity trade-off analysis is shown in Figure 4. The analysis shows that the increase in the equity constraints leads to an increase in the total unmet demand, but the increase is marginal. This shows that the cost of serving the high-vulnerability areas is minor. The analysis also shows that the equity constraints can be set at an optimal value where the efficiency cost is minor but the equity benefits are large. This is also similar to the findings of other optimization models, which have also shown the feasibility of the equity-aware supply chain design (Bertsimas et al., 2020; Daskin & Dean, 2004).

4.5 End-to-End Framework Validation

The integrated framework presented in Figure 5 presents a consistent performance pattern for all three stages, namely prediction, explanation, and allocation, where the insights derived through the application of XAI are explicitly connected to optimization constraints, thus avoiding the possibility of bias drivers being treated as correction artifacts only.

Overall, the results presented here highlight the potential of XAI as an effective approach for bias detection and

Figure 1. Grouped bar chart: PPE, vaccine, and drug demand by SVI tier.

Figure 2: SHAP-Based Global Feature Importance, ready to use in your paper.

Table 5. Summary Statistics of Logistics and Supply Chain Performance Metrics

Variable

Mean

SD

Min

Max

Average lead time (days)

6.8

2.1

2.0

14.0

Inventory stockout duration (days)

3.4

1.8

0.5

9.2

Distribution center utilization (%)

72.5

14.6

38.0

98.0

Delivery fulfillment rate (%)

84.1

10.3

52.0

99.0

Table 6. Summary Statistics of Social Vulnerability and Access Indicators

Variable

Mean

SD

Min

Max

County SVI score

0.52

0.21

0.08

0.96

Poverty rate (%)

18.7

7.9

4.1

39.5

Minority population share (%)

31.4

16.8

5.2

78.6

Uninsured population (%)

12.9

5.4

2.3

29.8

Table 7. Summary Statistics of Healthcare Access and Geographic Constraints

Variable

Mean

SD

Min

Max

Avg. travel time to facility (minutes)

27.4

11.9

6.3

68.7

Distance to nearest hospital (km)

18.2

9.6

2.1

55.4

Facilities per 10,000 population

2.6

1.1

0.4

6.8

Emergency response time (minutes)

14.9

6.7

3.8

41.2

Table 8. Allocation Priority Tier Based on Equity-Adjusted Need

Priority Tier

Description

Allocation Multiplier miprioritym_i^{priority}mipriority​

Low

Low adjusted need

1.0

Medium

Moderate adjusted need

1.25

High

High adjusted need

1.5

Table 9. Fairness Constraint Levels in Optimization Model

Constraint Level

Description

Minimum Allocation Ratio αi\alpha_iαi​

Standard

No additional equity protection

0.70

Equity-Enhanced

Moderate equity enforcement

0.85

Equity-Priority

Strong protection for vulnerable regions

0.95

correction in healthcare supply chain management. The proposed approach ensures transparent data-driven equity while maintaining feasibility, thus filling an important gap in fairness and explainability research, as noted by Doshi-Velez & Kim (2017) and Ghassemi et al. (2021).

5. Discussion

5.1 Interpretation of Key Findings

This study shows the potential benefits of combining explainable AI with fairness-aware optimization in reducing health disparities in the allocation of healthcare resources within the supply chain without compromising the operationality of the solution. The study revealed that the disparities in demand allocation are not random events; rather, they are inherent properties resulting from the pursuit of efficiency in the decision process with the use of historically unfair data and proxy indicators. The proposed approach addresses the problem of under-allocation in high-SVI regions without compromising the performance in low- and medium-SVI regions.

The SHAP-based analysis verifies this notion as social vulnerability, travel time, and disease burden factors have a greater influence on demand prediction than traditional utilization-based factors. This is consistent with previous research demonstrating utilization-based factors may not capture need among marginalized groups due to structural barriers to care (Obermeyer et al., 2019; Rajkomar et al., 2018). The explainability findings not only diagnose bias but also guide allocation constraints, thus bridging a gap in algorithmic transparency and action.

5.2 Implications for Algorithmic Fairness and XAI Research

From a methodological standpoint, this research contributes to the fairness and explainability literature by recontextualizing XAI as a decision-analytic tool rather than an interpretive technique. A significant body of existing XAI literature in healthcare is concerned with trust, interpretability, or debugging, and many studies fail to provide evidence of downstream effectiveness (Doshi-Velez & Kim, 2017; Ghassemi et al., 2021). The proposed framework formalizes explainability results as inputs to optimization problems to guarantee that bias insights are translated into equity constraints.

This brings about the much-needed balance in the limitations of fairness-aware machine learning, as it is not guaranteed that prediction equity can be attained under the influence of cost, capacity, and logistical constraints, as noted by Corbett-Davies & Goel (2018). This is because the framework considers fairness at two levels: prediction and allocation.

5.3 Policy and Practical Implications

The study regarding the difference between efficiency and equity has shown that we can significantly cut the gaps between who gets what without suffering a huge loss in terms of efficiency. To public sector leaders, this is very significant, especially since they are working with tight budgets and political pressures. The good news is that you don’t necessarily need to choose between the two. As long as the trade-offs are easy to explain, you can make the choices that align with the health equity goals of the public. The approach is also very fitting in terms of the logistics of the health system, especially in the United States, regarding health emergencies, vaccine distribution, and the distribution of vital health supplies. As a result, it can support the kinds of accountability and governance models that are increasingly being called for regarding the use of AI tools in the public sector (Zhang & Shah, 2023).

5.4 Limitations and Future Research Directions

However, there are a few limitations that are worth considering. Firstly, in the evaluation of the model, aggregated regional data and simulated allocation scenarios are employed. This approach, while similar to earlier studies in the domain of supply chains and public health, should be validated in future studies, incorporating real-time operational data and deployment scenarios. Secondly, the selection of equity weights and thresholds for constraints incorporates a degree of normative judgment, which may vary depending upon the policy environment. Sensitivity analysis helps in addressing this limitation, and this remains an important area of future research.

Furthermore, while SHAP and LIME are powerful tools for obtaining insights into the explainability of the model, these tools are not without their own set of limitations, which may arise in terms of feature correlation and the fidelity of local approximations (Ghassemi et al., 2021). Causal explainability and counterfactual analysis are areas that may be explored in future studies.

5.6 Broader Contributions

Besides the healthcare supply chain, the proposed framework can provide a generalizable blueprint for

Figure 3: Baseline vs Equity-Aware Allocation by SVI Tier

                        

Figure 5: Explainable AI Framework for Equity-Aware Healthcare Supply Chains

equitable decision-making in other areas of public policy, such as disaster response, infrastructure development, and social services. By providing a unified prediction, explanation, and optimization pipeline, this work contributes to a body of research that aims to develop AI systems that are both technically sound and ethically responsible.

6. Conclusion

In this study, we propose a fairness-aware AI system for health disparities detection and reduction in the context of healthcare SC. The proposed approach leverages the power of fairness-aware machine learning, XAI-based bias analysis, and equity-constrained optimization, which enables us to move from descriptive fairness analysis to more actionable and policy-relevant analysis. The proposed approach is significant because the obtained results show the potential of explainable AI in the detection and reduction of health disparities, and the proposed approach is efficient in terms of the absence of significant efficiency loss.

In an age of heightened reliance upon algorithmic decision-making in public health, this work serves as a reminder of the imperative that transparency, fairness, and accountability be built into decision systems from the very start. This proposed framework represents a significant contribution in terms of both methodological and policy impact.

Author Contributions

F.A. conceptualized the study, developed the framework, and wrote the original draft. S.H. contributed to methodology design, fairness model integration, and data interpretation. A.H. assisted in literature review, analysis of healthcare supply chain systems, and manuscript editing. K.A.R. supervised the research, validated the conceptual framework, and critically revised the manuscript.

References


Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.2139/ssrn.2477899

Bertsimas, D., Farias, V. F., & Trichakis, N. (2020). Fairness, efficiency, and flexibility in resource allocation. Management Science, 66(7), 3011–3027.

Braveman, P., Egerter, S., & Williams, D. R. (2011). The social determinants of health: Coming of age. Annual Review of Public Health, 32, 381–398. https://doi.org/10.1146/annurev-publhealth-031210-101218

Choi, T.-M. (2021). Risk analysis in logistics systems: A research agenda. Transportation Research Part E: Logistics and Transportation Review. https://doi.org/10.1016/j.tre.2020.102190

Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness. arXiv preprint arXiv:1808.00023.

Dasgupta, S., Bowen, V. B., Leidner, A., et al. (2020). Association between social vulnerability and COVID-19 vaccination coverage. MMWR Morbidity and Mortality Weekly Report.

Daskin, M. S., & Dean, L. K. (2004). Location of health care facilities. Operations Research, 52(1), 18–34.

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health, 3(11), e745–e750. https://doi.org/10.1016/S2589-7500(21)00208-9

Islam, M. M. (2025). Data-driven strategies for reducing healthcare disparities in rural America. Cuestiones de Fisioterapia, 54(5), 85–106.

Islam, M. M., Zerine, I., Rahman, M. A., Islam, M. S., & Ahmed, M. Y. (2024). AI-driven fraud detection in financial transactions: Using machine learning and deep learning to detect anomalies and fraudulent activities in banking and e-commerce transactions. SSRN. https://doi.org/10.2139/ssrn.5287281

Ivanov, D., & Dolgui, A. (2020). Viability of intertwined supply networks: Extending the supply chain resilience angles. International Journal of Production Research, 58(10), 2904–2915. https://doi.org/10.1080/00207543.2020.1750727

Jean-Jacques, M., & Bauchner, H. (2021). Vaccine distribution—equity left behind? JAMA, 325(9), 829–830. https://doi.org/10.1001/jama.2021.1205

Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems.

Marmot, M., Allen, J., Goldblatt, P., Herd, E., & Morrison, J. (2020). Health equity in England: The Marmot review 10 years on. BMJ. https://doi.org/10.1136/bmj.m693

Mehrabi, N., Morstatter, F., Saxena, N., et al. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Pfohl, S. R., Foryciarz, A., & Shah, N. H. (2019). An empirical characterization of fair machine learning for clinical risk prediction. Journal of Biomedical Informatics, 99, 103292.

Rajkomar, A., Hardt, M., Howell, M. D., Corrado, G., & Chin, M. H. (2018). Ensuring fairness in machine learning to advance health equity. Annals of Internal Medicine, 169(12), 866–872. https://doi.org/10.7326/M18-1990

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD, 1135–1144. https://doi.org/10.18653/v1/N16-3020

U.S. Department of Health and Human Services. (2021). HHS Equity Action Plan.

Uscher-Pines, L., et al. (2021). Barriers and facilitators to equitable COVID-19 vaccine distribution. Health Affairs, 40(10), 1537–1545.

Zerine, I., Islam, M. S., Ahmad, M. Y., Islam, M. M., & Biswas, Y. A. (2023). AI-driven supply chain resilience: Integrating reinforcement learning and predictive analytics for proactive disruption management. Business and Social Sciences, 1(1), 1–12.

Zerine, I., Rahman, T., Ahmad, M. Y., Biswas, Y., & Islam, M. M. (2025). Enhancing public health supply chain forecasting using machine learning for crisis preparedness and system resilience. International Journal of Communication Networks and Information Security, 17(4), 82–98.

Zhang, H., Luo, X., & Song, H. (2022). Machine learning-driven healthcare supply chain management: A review. Computers & Industrial Engineering, 167, 107994.

Zhang, Y., & Shah, N. H. (2023). Algorithmic fairness in healthcare: A review and recommendations. NPJ Digital Medicine, 6(1), 1–9. https://doi.org/10.1038/s41746-025-02240-7


Article metrics
View details
0
Downloads
0
Citations
543
Views
📖 Cite article

View Dimensions


View Plumx


View Altmetric



0
Save
0
Citation
543
View
0
Share