Welcome to the Health System Data Science (HSDS) Lab, headed by Professor Sang Min Park. Our primary interests include (1) using data science methodologies to conduct large-scale epidemiological studies with public health big data, and (2) finding applications of deep learning and explainable artificial intelligence (XAI) in healthcare.
Using public health big data such as the Korean National Health Insurance System's health claims data, we are interested in creating a big data platform that merges individual-level information with area-level environmental factors. With this platform, we aim to conduct a wide range of studies, such as identifying individual- and area-level risk factors associated with major health outcomes, determining the effect of changes in health behaviors on patient prognosis, identifying populations susceptible to major outcomes, and using healthcare utilization metrics such as patterns of care and continuity of care to predict patient outcomes.
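To make one of these utilization metrics concrete, the sketch below computes the Bice-Boxerman continuity of care (COC) index from a patient's visit history; the function name and the toy visit sequence are illustrative and do not come from our claims platform.

```python
from collections import Counter

def continuity_of_care(provider_ids):
    """Bice-Boxerman continuity of care (COC) index.

    provider_ids: one provider identifier per visit.
    Returns a value in [0, 1]; 1 means all visits were with a single provider.
    (Illustrative helper, not part of our platform code.)
    """
    n_visits = len(provider_ids)
    if n_visits < 2:
        return None  # COC is undefined for fewer than two visits
    visit_counts = Counter(provider_ids)
    sum_sq = sum(count ** 2 for count in visit_counts.values())
    return (sum_sq - n_visits) / (n_visits * (n_visits - 1))

# Toy example: 6 visits spread across 3 providers
print(continuity_of_care(["A", "A", "B", "A", "C", "A"]))  # 0.4
```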
Emerging fusion databases have paved the way for opportunistic deep learning, in which unconventional yet clinically relevant medical outcomes can be predicted from medical images. Furthermore, going beyond the ability to predict outcomes, we are interested in the real clinical implications of these deep learning-based biomarkers relative to existing predictive gold standards. We have used retinal fundus images to predict atherosclerosis and validated the model's risk-stratification performance against existing stratification algorithms. We are searching for more clinically relevant deep learning-based biomarkers to help clinicians better understand patients at risk.
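As a schematic of this opportunistic setting, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 to predict a binary outcome label from fundus photographs; the directory layout, label definition, and hyperparameters are placeholders rather than the configuration used in our published work.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Placeholder preprocessing; real fundus pipelines typically add
# field-of-view cropping and illumination normalization.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: fundus/train/<label>/<image>.png
train_set = datasets.ImageFolder("fundus/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ImageNet-pretrained backbone with a new two-class head
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```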
Understanding deep learning model decisions is important for clinical decision support. However, the black-box nature of deep learning models makes deep learning-based decisions difficult to understand and interpret. We are interested in finding the best ways to make deep learning models understandable, and acceptable, to clinicians, regulatory bodies, and patients. For example, going beyond what simple importance heatmaps have to offer, we have developed and validated counterfactual example-based explanations that use adversarial examples to explain a model's rationale, or causal inference, for deep learning-based glaucoma decisions [https://doi.org/10.1016/j.ophtha.2020.06.036].
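A minimal sketch of the counterfactual idea, assuming an already-trained PyTorch classifier: gradient steps perturb an input image until the model's prediction flips to a chosen target class, while an L2 penalty keeps the change small, so the resulting difference image indicates which regions drive the decision. The function, step size, and penalty weight are illustrative and are not the method or hyperparameters of the cited paper.

```python
import torch
import torch.nn.functional as F

def counterfactual(model, image, target_class, steps=200, step_size=0.01, l2_weight=0.05):
    """Gradient-based counterfactual sketch: nudge `image` until `model`
    predicts `target_class`, while an L2 term keeps the change small."""
    model.eval()
    x = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x.unsqueeze(0))
        if logits.argmax(dim=1).item() == target_class:
            break  # the model's decision has flipped
        loss = F.cross_entropy(logits, torch.tensor([target_class])) \
               + l2_weight * (x - image).pow(2).sum()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= step_size * grad
    delta = (x - image).detach()  # difference image highlighting decision-driving regions
    return x.detach(), delta
```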
Further information on XAIMED is available below: