Keynote Speakers of MICAD 2025

Prof. Junzhou Huang
The University of Texas at Arlington, United States
FAIMBE
Jenkins Garrett Professor
Dr. Junzhou Huang is the Jenkins Garrett Professor in the Computer Science and Engineering Department at the University of Texas at Arlington. His major research interests include machine learning, computer vision, medical image analysis and bioinformatics. His research has been recognized by several awards, including the NSF CAREER Award, the Google TensorFlow Model Garden Award, the Microsoft Accelerate Foundation Models Research Award, four Best Paper Awards (MICCAI'10, FIMH'11, STMI'12 and MICCAI'15), and two Best Paper Nominations (MICCAI'11 and MICCAI'14). His research projects are supported by federal and state agencies (NSF, NIH, CPRIT) and by industry (Google, Amazon, Microsoft, IBM, Samsung, XtalPi, Nokia and Johnson & Johnson). He enjoys developing efficient algorithms with strong theoretical guarantees to solve practical problems involving large-scale data.
Prof. Jinman Kim
The University of Sydney, Australia
Director of the Biomedical Data Analysis and Visualisation (BDAV) Lab
Research Director of the Telehealth and Technology Centre, Nepean Hospital
Professor Jinman Kim is a Professor of Computer Science at the University of Sydney, where he established and leads the Biomedical Data Analysis and Visualisation (BDAV) Lab at the School of Computer Science. He is the Imaging Theme Leader of the ARC Training Centre in Innovative Biomedical Engineering on musculoskeletal technologies, and also the Director of the Telehealth and Technology Centre, Nepean Hospital. Prof Kim co-leads the ‘digital health imaging’ theme of the Faculty of Engineering’s Digital Science Initiative, with the vision and strategy of improving the use and accessibility of medical imaging through AI innovations.
Prof Kim received his PhD in computer science from the University of Sydney in 2006 and was awarded an APD Research Fellowship by the ARC in 2008. In 2010, he joined the University of Geneva as an Experienced Researcher (Marie Curie). He then returned to the University of Sydney, where he was appointed Senior Lecturer (2013), Associate Professor (2016), and Professor (2022).
Speech Title: Multi-modal AI for Biomedical Image Analysis and Visualisation
Abstract: Biomedical imaging plays a pivotal role in patient management in modern healthcare, with most patients treated in hospitals undergoing imaging procedures. These technologies can visualise anatomy and function in virtually every organ system of the body in intricate detail. Imaging modalities vary widely in complexity and sophistication, from plain digital chest X-rays to simultaneous functional and anatomical imaging with positron emission tomography and computed tomography (PET-CT), histopathology and cellular imaging. The opportunity now is to maximise the extraction of meaningful information from these images and to present it to users in a useful form. Strategies are needed to harness knowledge from vast image datasets and complementary sources such as image sequences, text reports, and genomics. Fortunately, the era of artificial intelligence (AI) is fuelling the growth of smart decision support and analysis tools for medical image analysis. Despite rapid advances in integrating AI algorithms into clinical decision support systems, we are still in the nascent stages of the AI revolution in medical imaging. This talk will present our research on cross-modal learning to integrate imaging and complementary data for disease modelling, analysis and visualisation, aimed at improving understanding in an intuitive way.
Prof. Gang Li
University of North Carolina at Chapel Hill, United States
Principal Investigator
UNC BRAIN Lab
Dr. Gang Li is a Full Professor in the Department of Radiology at the University of North Carolina at Chapel Hill. He received his PhD in 2010 from Northwestern Polytechnical University. He was a Research Fellow at The Methodist Hospital Research Institute, Weill Medical College of Cornell University and Harvard Medical School. He is the recipient of the UNC Junior Faculty Development Award, NIH K01 Career Award, and the Distinguished Investigator Award of the Academy for Radiology & Biomedical Imaging Research.
Prof. Yiyu Shi
University of Notre Dame, United States
Dr. Yiyu Shi is currently a Professor in the Department of Computer Science and Engineering at the University of Notre Dame, the site director of the National Science Foundation I/UCRC on Alternative and Sustainable Intelligent Computing, and the director of the Sustainable Computing Lab (SCL). He is also a visiting scientist at Boston Children’s Hospital, the primary pediatric program of Harvard Medical School. He received his B.S. in Electronic Engineering from Tsinghua University, Beijing, China in 2005, and his M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles in 2007 and 2009, respectively. His current research interests focus on hardware intelligence and biomedical applications. In recognition of his research, more than a dozen of his papers have received or been nominated for best paper awards in top journals and conferences, including the 2021 IEEE Transactions on Computer-Aided Design Donald O. Pederson Best Paper Award. He is also the recipient of the Facebook Research Award, the IBM Invention Achievement Award, the Japan Society for the Promotion of Science (JSPS) Faculty Invitation Fellowship, the Humboldt Research Fellowship, the IEEE St. Louis Section Outstanding Educator Award, the Academy of Science (St. Louis) Innovation Award, the Missouri S&T Faculty Excellence Award, the NSF CAREER Award, the IEEE Region 5 Outstanding Individual Achievement Award, the Air Force Summer Faculty Fellowship, and the IEEE Computer Society Mid-Career Research Achievement Award. He has served on the technical program committees of many international conferences. He is the deputy editor-in-chief of the IEEE VLSI CAS Newsletter and an associate editor of various IEEE and ACM journals. He is an IEEE CEDA Distinguished Lecturer and an ACM Distinguished Speaker.
Speech Title: Can Quantum Computers Help Medical Image Computing?
Abstract: Medical image computing is rapidly advancing with the help of machine learning, yet the field still struggles with limited and imbalanced datasets, along with strict privacy constraints. These challenges make it difficult to train reliable models and highlight the importance of new approaches to data generation and augmentation. Quantum computing, though still in its early stages, offers intriguing opportunities here. In this keynote, I will explore how quantum methods can enhance generative models for medical imaging, and share recent work on hybrid classical–quantum approaches that produce high-quality images from scarce data. I will also discuss what we learned from testing these ideas on real quantum hardware, and what this means for the future of medical image computing.
Prof. Greg Slabaugh
Queen Mary University of London, UK
Director of DERI
Dr. Greg Slabaugh is Professor and Director of the Digital Environment Research Institute (DERI) at Queen Mary University of London and an expert in computer vision and artificial intelligence. His research spans deep learning, computational photography, and medical image computing. Prior to joining Queen Mary, he was Chief Scientist in Computer Vision (EU) for Huawei and held other industry positions at Siemens and Medicsight. His work on multimodal AI and medical image analysis aligns closely with the conference's focus. Dr. Slabaugh frequently serves on program committees for leading computer vision and machine learning conferences such as CVPR, NeurIPS, and AAAI.
Speech Title: From Multimodal Fusion to Foundation Models and Digital Twins in Medical AI
Abstract: Medical AI is moving beyond narrow, single-task systems toward models that can integrate diverse data sources and provide clinically meaningful insights. In this talk, I will outline recent advances in multimodal fusion, foundation architectures, and ultrasound modeling, and show how these innovations are converging toward the vision of digital twins. By linking methodological progress with applications in pathology, musculoskeletal conditions, and cardiology, I will highlight how scalable and interpretable AI can better support precision medicine and patient-specific decision making.
Prof. Tom Vercauteren
King's College London, UK
Tom Vercauteren has been Professor of Interventional Image Computing at King’s College London since 2018. He is also co-founder and Chief Scientific Officer of Hypervision Surgical, a spin-out company focusing on intraoperative hyperspectral imaging. From 2018 to 2023, he held the Medtronic / Royal Academy of Engineering Research Chair in Machine Learning for Computer-assisted Neurosurgery at King’s. From 2014 to 2018, he was an Associate Professor at UCL, where he served as Deputy Director of the Wellcome / EPSRC Centre for Interventional and Surgical Sciences (2017-18).
From 2004 to 2014, he worked for Mauna Kea Technologies, Paris, where he led the research and development team designing image computing solutions for the company’s CE-marked and FDA-cleared optical biopsy device; his work is now used in hundreds of hospitals worldwide. He is a graduate of Columbia University and École Polytechnique and obtained his PhD from Inria in 2008. Tom is also an established open-source software supporter.
Tom Vercauteren's research focuses on translational medical image computing, machine learning and interventional imaging devices with a specific interest in their development for surgery and interventional sciences.