Yi Qin (Eason)
I am currently a Ph.D. candidate supervised by
Prof. Xiaomeng Li
at ECE, the Hong Kong University of Science and
Technology. I have also had the opportunity to work with
Prof. Hao Wang and
Prof. Lu Mi, and I collaborate with the Guangdong Cardiovascular
Institute and the Prince of Wales Hospital on echocardiography
AI. Before joining HKUST, I obtained my BEng in Automation
Science and Engineering from the South China University of
Technology.
Email / CV / Scholar / Twitter / LinkedIn
Research
My research interests lie at the intersection of machine
intelligence and digital healthcare, with a particular
focus on echocardiography and cardiology. Topics of
interest include:
- Diffusion-based generative models
- Trustworthy/explainable ML (energy-based and concept-based models)
- Foundation model (FM)-based disease diagnosis and prognosis prediction
"*" indicates equal contribution, "_" indicates equal advising.
Multi-Agent Collaboration for Integrating
Echocardiography Expertise in Multi-Modal Large Language
Models
Yi Qin, Dinusara Sasindu Gamage
Nanayakkara, Xiaomeng Li
MICCAI, 2025
Paper
We propose the Multi-Agent Collaborative Expertise Extractor,
a multi-agent system that builds the Echocardiography
Expertise Database, a rich cardiac knowledge base compiled
from diverse sources. We also introduce Echocardiography
Expertise-enhanced Visual Instruction Tuning, a
lightweight tuning method that efficiently injects this
expertise into models by training fewer than 1% of the
parameters.
EchoViewCLIP: Advancing Video Quality Control through
High-performance View Recognition of
Echocardiography
Shanshan Song, Yi Qin, Honglong Yang,
Taoran Huang, Hongwen Fei, Xiaomeng Li
MICCAI, 2025
Paper
EchoViewCLIP tackles view recognition and video quality
control in echocardiography using a large dataset covering
38 standard views and out-of-distribution (OOD) samples. It
introduces a Temporal-informed Multi-Instance Learning (TML)
module for capturing key frames and a Negation
Semantic-Enhanced (NSE) detector for OOD rejection, while a
quality assessment branch further boosts reliability. The
model achieves 96.1% accuracy, advancing fine-grained view
recognition and robust OOD handling in echocardiography.
Multi-Modal Explainable Medical AI Assistant for
Trustworthy Human-AI Collaboration
Honglong Yang, Shanshan Song, Yi Qin, Lehan
Wang, Haonan Wang, Xinpeng Ding, Qixiang Zhang, Bodong Du,
Xiaomeng Li
arXiv
Paper
We introduce XMedGPT, a multi-modal medical AI assistant
that enhances clinical usability by combining accurate
diagnostics with visual-text explainability and
uncertainty quantification, enabling transparent and
trustworthy decision-making.
Reinforced Correlation Between Vision and Language for
Precise Medical AI Assistant
Haonan Wang, Jiaji Mao, Lehan Wang, Qixiang Zhang, Marawan
Elbatel, Yi Qin, Huijun Hu, Baoxun Li,
Wenhui Deng, Weifeng Qin, Hongrui Li, Jialin Liang, Jun
Shen, Xiaomeng Li
arXiv, 2025
Paper
We introduce RCMed, a full-stack medical AI assistant that
enhances multimodal accuracy through hierarchical
vision-language grounding and a self-reinforcing
correlation loop. Trained on 20M samples, it excels in 165
clinical tasks across 9 modalities, achieving
state-of-the-art performance and strong generalization in
real-world cancer diagnosis and cell segmentation.
Energy-Based Conceptual Diffusion Model
Yi Qin, Xinyue Xu, Hao Wang,
Xiaomeng Li
NeurIPS Safe Generative AI Workshop, 2024
Paper
We propose Energy-Based Conceptual Diffusion Models
(ECDMs), a framework that unifies concept-based generation,
conditional interpretation, concept debugging,
intervention, and imputation under a joint energy-based
formulation.
Concept-Based Unsupervised Domain Adaptation
Xinyue Xu, Yueying Hu, Hui Tang, Yi Qin, Lu
Mi, Hao Wang,
Xiaomeng Li
ICML, 2025
Paper
CUDA improves the robustness of Concept Bottleneck Models
(CBMs) under domain shift by aligning concept
representations through adversarial training, allowing
flexible concept variation across domains, and enabling
concept inference without concept labels. It boosts
interpretability and outperforms state-of-the-art CBM and
domain adaptation methods.
- 'The Blade Wall': an interactive computer vision art
installation, created in cooperation with DJI and installed
in the DJI | Hasselblad Mixed Flagship Store, Nanjing.
- CN Patent [CN202211153269.4]: An intelligent scheduling
method based on 3D intelligent detection [Under Substantive
Examination]
- CN Patent [CN202211156132.4]: A motion-sensing-based host
computer control system for a multi-motor array [Under
Substantive Examination]
- CN Patent [CN202211153265.6]: A motion-sensing-based
embedded low-level driver system for a multi-motor array
[Under Substantive Examination]
- CN Patent [ZL202210346880.2]: A Transformer-based method
for separating logistics parcels [Published]
Honors, Awards, and Services
- ECE Best TA Award (2024/25), awarded to 5-7 TAs annually
- HKUST RedBird Academic Excellence Award for Continuing PhD Students (2024-25)
- HKUST RedBird PhD Recruitment Award (2023-24)
- Honoured Thesis: Diffusion Model-Empowered Unsupervised Medical Image Registration
- First Prize, School of Automation Science and Engineering Scholarship (2021)
- Metric Ranking #1/387, Overall Ranking #2, Kenya Clinical Reasoning Challenge, 2025
- Silver, Guangdong BME Innovative Competition, 2022
- Bronze & Best Strategy Award, China University Robot Competition (RoboMaster), 2021
- Third Prize, ICRA & RoboMaster AI Challenge, 2021
- Reviewer for ICONIP 2023, IEEE TNNLS, IEEE TPAMI
- Challenge Organizer: TriALS@MICCAI 2025
- TA: ELEC 4840 Artificial Intelligence for Medical Image Analysis (23/24 Spring, 24/25 Spring [Departmental Best TA])
- TA: ELEC 3300 Introduction to Embedded Systems (24/25 Fall)
- Technical advisory board member: Guangdong QiLi Tech. Co. Ltd.
- TED Talk Translator
This website is adapted from Jonathan Barron's
source code.