Medicina (Lithuania), vol. 61, no. 9, 2025 (SCI-Expanded)
Background and Objectives: This study evaluates the diagnostic potential of routinely available hematological parameters for acute myocardial infarction (AMI) using an Explainable Neural Network (ENN) model that combines high predictive accuracy with interpretability.

Materials and Methods: A publicly available dataset comprising 981 individuals (477 AMI patients and 504 controls) was analyzed. A broad set of hematological features—including white blood cell subtypes, red cell indices, and platelet-based markers—was used to train the ENN model. Bootstrap resampling was applied to enhance model generalizability. Model performance was assessed with standard classification metrics: accuracy, sensitivity, specificity, F1-score, and Matthews Correlation Coefficient (MCC). SHapley Additive exPlanations (SHAP) were employed to provide both global and individualized insights into feature contributions.

Results: Hematological and biochemical parameters of the 981 individuals were analyzed. The ENN model demonstrated excellent diagnostic performance, achieving an accuracy of 94.1%, balanced accuracy of 94.2%, F1-score of 93.9%, and MCC of 0.883. The AUC was 0.96, confirming strong discriminative ability. SHAP-based explainability analyses highlighted neutrophils (NEU), white blood cells (WBC), red cell distribution width (RDW-CV), basophils (BA), and lymphocytes (LY) as the most influential predictors. Individual- and class-level SHAP evaluations revealed that inflammatory and erythrocyte-related parameters played decisive roles in AMI classification, while distributional analyses showed narrower parameter ranges in healthy individuals and greater heterogeneity among patients.

Conclusions: The findings suggest that cost-effective, non-invasive blood parameters can be effectively utilized within interpretable AI frameworks to enhance AMI diagnosis.
The integration of ENN with SHAP provides a dual benefit of diagnostic power and transparent rationale, facilitating clinician trust and real-world applicability. This scalable, explainable model offers a clinically viable decision-support tool aligned with the principles of precision medicine and ethical AI.
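The evaluation metrics named in Materials and Methods (accuracy, sensitivity, specificity, F1-score, MCC) are all derived from a binary confusion matrix, and the bootstrap resampling step can be mirrored by resampling cases with replacement to obtain a percentile confidence interval. The sketch below is a minimal pure-Python illustration of both, assuming binary labels (1 = AMI, 0 = control); the toy data and the `bootstrap_ci` helper are illustrative, not the study's actual pipeline or dataset.

```python
import math
import random


def _div(a, b):
    # Guarded division: degenerate resamples (e.g., no positives) yield 0.0
    return a / b if b else 0.0


def metrics(y_true, y_pred):
    """Binary classification metrics from the confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = _div(tp + tn, tp + tn + fp + fn)
    sens = _div(tp, tp + fn)                      # sensitivity (recall)
    spec = _div(tn, tn + fp)                      # specificity
    prec = _div(tp, tp + fp)
    f1 = _div(2 * prec * sens, prec + sens)
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = _div(tp * tn - fp * fn, mcc_den)        # Matthews Correlation Coefficient
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "f1": f1, "mcc": mcc}


def bootstrap_ci(y_true, y_pred, stat="accuracy",
                 n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for one metric: resample cases with replacement."""
    rng = random.Random(seed)
    n = len(y_true)
    vals = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        vals.append(metrics([y_true[i] for i in idx],
                            [y_pred[i] for i in idx])[stat])
    vals.sort()
    return vals[int(alpha / 2 * n_boot)], vals[int((1 - alpha / 2) * n_boot) - 1]
```

For example, with `y_true = [1, 1, 1, 0, 0, 0, 1, 0]` and `y_pred = [1, 1, 0, 0, 0, 1, 1, 0]`, the confusion matrix is TP = 3, TN = 3, FP = 1, FN = 1, so accuracy, sensitivity, specificity, and F1 are all 0.75 and MCC is 0.5. The percentile bootstrap here is one common variant; the paper does not specify which bootstrap scheme was used.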