Focus and Legacy: iMED Students Present 14 Ophthalmology-Related Papers at the MICCAI Main Conference

        Carrying forward the team culture of "unity, focus, and perseverance", and under the guidance of the iMED supervisors, students of the iMED team present 14 papers on ophthalmic image processing and related medical image analysis at the MICCAI 2022 main conference: 12 from the iMED China team and 2 from the iMED Singapore team. The papers cover iMED's four core research directions: anterior segment imaging, ophthalmic surgical navigation, eye-brain association, and cross-media medical image analysis. Hosting MICCAI in Singapore carries special emotional significance for the iMED team, which was founded in Singapore in 2007 and has focused on ophthalmic image analysis ever since. We sincerely wish the conference great success.

 

[PaperID: 50] Structure-consistent Restoration Network for Cataract Fundus Image Enhancement

Li, Heng; Virtual; Image Segmentation, Registration & Reconstruction II; Poster 6

 

Fundus photography is a routine examination in clinics to diagnose and monitor ocular diseases. However, for cataract patients, the fundus image always suffers quality degradation caused by the clouded lens. The degradation prevents reliable diagnosis by ophthalmologists or computer-aided systems. To improve the certainty in clinical diagnosis, restoration algorithms have been proposed to enhance the quality of fundus images. Unfortunately, challenges remain in the deployment of these algorithms, such as collecting sufficient training data and preserving retinal structures. In this paper, to circumvent the strict deployment requirement, a structure-consistent restoration network (SCR-Net) for cataract fundus images is developed from synthesized data that share an identical structure. A synthesized cataract set (SCS) is first simulated to collect cataract fundus images sharing identical structures. Then high-frequency components (HFCs) are extracted from the SCS to constrain structure consistency, such that structure preservation in SCR-Net is enforced. The experiments demonstrate the effectiveness of SCR-Net in comparison with state-of-the-art methods and in follow-up clinical applications.
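
To give a concrete sense of the structure-consistency idea, the sketch below extracts high-frequency components by subtracting a Gaussian-blurred copy of the image. This is a generic formulation rather than the exact HFC extraction used in SCR-Net, and the kernel size and sigma are illustrative values only.

import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 21, sigma: float = 5.0) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

def high_frequency_components(img: torch.Tensor, size: int = 21, sigma: float = 5.0) -> torch.Tensor:
    """HFC = image minus its Gaussian-blurred (low-pass) version.

    img: (B, C, H, W) tensor, e.g. a fundus image scaled to [0, 1]."""
    k = gaussian_kernel(size, sigma).to(img)[None, None].repeat(img.shape[1], 1, 1, 1)
    low = F.conv2d(img, k, padding=size // 2, groups=img.shape[1])
    return img - low

# The structure-consistency idea can then be expressed as a penalty on the
# distance between HFCs of the restored image and of its clear reference:
# loss = F.l1_loss(high_frequency_components(restored),
#                  high_frequency_components(reference))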

img1

 

[PaperID: 60] Siamese Encoder-based Spatial-Temporal Mixer for Growth Trend Prediction of Lung Nodules on CT Scans

Fang, Jiansheng; Virtual; Computer Aided Diagnosis I; Poster 1

 

In the management of lung nodules, it is desirable to predict nodule evolution in terms of diameter variation on Computed Tomography (CT) scans and then provide a follow-up recommendation according to the predicted growth trend of the nodule. To improve the performance of growth trend prediction for lung nodules, it is vital to compare changes of the same nodule across consecutive CT scans. Motivated by this, we screened out 4,666 subjects with more than two consecutive CT scans from the National Lung Screening Trial (NLST) dataset to organize a temporal dataset called NLSTt. Specifically, we first detect and pair regions of interest (ROIs) covering the same nodule based on registered CT scans. After that, we predict the texture category and diameter size of the nodules through models. Last, we annotate the evolution class of each nodule according to its changes in diameter. Based on the built NLSTt dataset, we propose a siamese encoder to simultaneously exploit the discriminative features of 3D ROIs detected from consecutive CT scans. We then design a novel spatial-temporal mixer (STM) to leverage the interval changes of the same nodule in sequential 3D ROIs and to capture spatial dependencies of nodule regions in the current 3D ROI. Following the clinical diagnosis routine, we employ a hierarchical loss to pay more attention to growing nodules. Extensive experiments on the organized dataset demonstrate the advantage of the proposed method. We also conduct experiments on an in-house dataset to evaluate the clinical utility of our method by comparing it against skilled clinicians.
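
As a hedged illustration of the siamese idea, the sketch below applies one shared 3D encoder to two consecutive nodule ROIs and fuses the paired features with a simple MLP head. The stand-in backbone, layer sizes, and fusion are assumptions for illustration, not the paper's actual STM architecture.

import torch
import torch.nn as nn

class Tiny3DEncoder(nn.Module):
    """Stand-in 3D CNN; the paper's real backbone is not reproduced here."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x):                          # x: (B, 1, D, H, W)
        return self.proj(self.features(x).flatten(1))

class SiameseGrowthPredictor(nn.Module):
    """Shared (siamese) encoder on two consecutive ROIs, followed by a
    simple MLP over the concatenated and difference features."""
    def __init__(self, num_classes: int = 3, dim: int = 128):
        super().__init__()
        self.encoder = Tiny3DEncoder(dim)           # weights shared across time points
        self.head = nn.Sequential(
            nn.Linear(dim * 3, dim), nn.ReLU(inplace=True),
            nn.Linear(dim, num_classes),
        )

    def forward(self, roi_prev, roi_curr):
        f_prev, f_curr = self.encoder(roi_prev), self.encoder(roi_curr)
        fused = torch.cat([f_prev, f_curr, f_curr - f_prev], dim=1)
        return self.head(fused)                     # logits over growth-trend classes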

img2

 

[PaperID: 130] Weighted Concordance Index Loss-based Multimodal Survival Modeling for Radiation Encephalopathy Assessment in Nasopharyngeal Carcinoma Radiotherapy

Fang, Jiansheng; Virtual; Computer Aided Diagnosis II

 

Radiation encephalopathy (REP) is the most common complication of nasopharyngeal carcinoma (NPC) radiotherapy. It is highly desirable to assist clinicians in optimizing the NPC radiotherapy regimen to reduce radiotherapy-induced temporal lobe injury (RTLI) according to the probability of REP onset. To the best of our knowledge, this is the first exploration of predicting radiotherapy-induced REP by jointly exploiting image and non-image data of the NPC radiotherapy regimen. We cast REP prediction as a survival analysis task and evaluate predictive accuracy in terms of the concordance index (CI). We design a deep multimodal survival network (MSN) with two feature extractors to learn discriminative features from multimodal data. One feature extractor imposes feature selection on non-image data, and the other learns visual features from images. Because the prior balanced CI (BCI) loss function, which directly maximizes the CI, is sensitive to uneven sampling per batch, we propose a novel weighted CI (WCI) loss function that leverages all REP samples effectively by assigning them different weights through a dual average operation. We further introduce a temperature hyper-parameter for the WCI to sharpen the risk differences of sample pairs and help model convergence. We extensively evaluate the WCI on a private dataset to demonstrate its advantages over its counterparts. The experimental results also show that multimodal data from NPC radiotherapy can bring further gains for REP risk prediction.
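
The exact WCI formulation is not reproduced in this abstract; the snippet below is only a hedged sketch of a differentiable concordance-style ranking loss with a temperature, where the per-anchor averaging loosely mimics the described dual average operation and is not the paper's exact definition.

import torch

def concordance_ranking_loss(risk, time, event, tau: float = 0.1):
    """Differentiable surrogate of the concordance index.

    risk  : (N,) predicted risk scores (higher = earlier REP onset expected)
    time  : (N,) observed time to REP onset or censoring
    event : (N,) 1 if REP onset was observed, 0 if censored
    tau   : temperature; smaller values sharpen pairwise risk differences
    """
    # A pair (i, j) is comparable when subject i has an observed event
    # and its event time is earlier than subject j's time.
    earlier = time.unsqueeze(1) < time.unsqueeze(0)              # (N, N)
    comparable = earlier & event.bool().unsqueeze(1)
    if comparable.sum() == 0:
        return risk.new_tensor(0.0)

    diff = (risk.unsqueeze(1) - risk.unsqueeze(0)) / tau         # risk_i - risk_j
    pair_loss = -torch.log(torch.sigmoid(diff) + 1e-8)

    # Average per anchor first, then across anchors, so subjects with many
    # comparable pairs do not dominate the batch (a "dual average"-like step).
    per_anchor = (pair_loss * comparable).sum(1) / comparable.sum(1).clamp(min=1)
    has_pairs = comparable.any(1)
    return per_anchor[has_pairs].mean()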

img3

 

[PaperID: 151] Degradation-invariant Enhancement of Fundus Images via Pyramid Constraint Network

Liu, Haofeng; Virtual; Image Segmentation, Registration & Reconstruction I; Poster 4

 

As an economical and efficient fundus imaging modality, retinal fundus images have been widely adopted in clinical fundus examination. Unfortunately, fundus images often suffer from quality degradation caused by imaging interferences, leading to misdiagnosis. Despite impressive enhancement performances that state-of-the-art methods have achieved, challenges remain in clinical scenarios. For boosting the clinical deployment of fundus image enhancement, this paper proposes the pyramid constraint to develop a degradation-invariant enhancement network (PCE-Net), which mitigates the demand for clinical data and stably enhances unknown data. Firstly, high-quality images are randomly degraded to form sequences of low-quality ones sharing the same content (SeqLCs). Then individual low-quality images are decomposed to Laplacian pyramid features (LPF) as the multi-level input for the enhancement. Subsequently, a feature pyramid constraint (FPC) for the sequence is introduced to enforce the PCE-Net to learn a degradation-invariant model. Extensive experiments have been conducted under the evaluation metrics of enhancement and segmentation. The effectiveness of the PCE-Net was demonstrated in comparison with state-of-the-art methods and the ablation study.
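
For readers unfamiliar with the multi-level input, the sketch below builds a simplified Laplacian pyramid (average pooling instead of Gaussian filtering); the number of levels is an assumption, and the snippet is not the exact LPF decomposition used in PCE-Net.

import torch
import torch.nn.functional as F

def laplacian_pyramid(img: torch.Tensor, levels: int = 3):
    """Decompose (B, C, H, W) images into a simplified Laplacian pyramid.

    Each level keeps the detail lost by downsampling; the final element is
    the low-resolution residual. Such multi-level inputs can feed an
    enhancement network at matching scales."""
    pyramid, current = [], img
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        pyramid.append(current - up)    # band-pass detail at this scale
        current = down
    pyramid.append(current)             # coarsest residual
    return pyramid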

img4

 

[PaperID: 429] Delving into Local Features for Open-Set Domain Adaptation in Fundus Image Analysis

Yi Zhou, Shaochen Bai, Tao Zhou, Yu Zhang, and Huazhu Fu; Poster MV-2-PC12

 

Unsupervised domain adaptation (UDA) has received significant attention in medical image analysis when labels are only available for the source domain data but not for the target domain. Previous UDA methods mainly focused on the closed-set scenario, assuming that only the domain distribution shifts across domains while the label space is the same. However, in the practice of medical imaging, the disease categories of training data in the source domain are usually limited, and the open-world target domain data may have many unknown classes private to the source domain. Thus, open-set domain adaptation (OSDA) has great potential in this area. In this paper, we explore the OSDA problem by delving into local features for fundus disease recognition. We propose a collaborative regional clustering and alignment method to identify the common local feature patterns which are category-agnostic. Then, a cluster-aware contrastive adaptation loss is introduced to adapt the distributions based on the common local features. We also construct the first fundus image benchmark for OSDA to evaluate our methods and carry out extensive experiments for comparison. The results show that our model achieves consistent improvements over state-of-the-art methods.
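
As a rough sketch of what a cluster-aware contrastive loss can look like, the snippet below pulls together local features assigned to the same category-agnostic cluster in a supervised-contrastive fashion; the paper's exact adaptation loss may differ, and the temperature value is illustrative.

import torch
import torch.nn.functional as F

def cluster_contrastive_loss(feats, cluster_ids, tau: float = 0.07):
    """Contrastive loss over local features: features assigned to the same
    (category-agnostic) cluster are pulled together, others pushed apart.

    feats       : (N, D) local feature vectors from source and target images
    cluster_ids : (N,) cluster assignment of each local feature
    """
    z = F.normalize(feats, dim=1)
    sim = z @ z.t() / tau                                        # (N, N) similarities
    n = z.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)) & ~self_mask

    # log-probability of each pair, excluding self-similarity from the denominator
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")),
                                     dim=1, keepdim=True)
    has_pos = pos.any(1)
    loss = -(log_prob * pos).sum(1)[has_pos] / pos.sum(1)[has_pos]
    return loss.mean()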

img5

 

[PaperID: 939] Instrument-tissue Interaction Quintuple Detection in Surgery Videos

Lin Wenjun; In person; Computer-Assisted Interventions; Poster 3

 

Instrument-tissue interaction detection in surgical videos is a fundamental problem for surgical scene understanding which is of great significance to computer-assisted surgery. However, few works focus on this fine-grained surgical activity representation. In this paper, we propose to represent instrument-tissue interaction as <instrument bounding box, tissue bounding box, instrument class, tissue class, action class> quintuples. We present a novel quintuple detection network (QDNet) for the instrument-tissue interaction quintuple detection task in cataract surgery videos. Specifically, a spatiotemporal attention layer (STAL) is proposed to aggregate spatial and temporal information of the regions of interest between adjacent frames. We also propose a graph-based quintuple prediction layer (GQPL) to reason the relationship between instruments and tissues. Our method achieves an mAP of 42.24% on a cataract surgery video dataset, significantly outperforming other methods.
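
The quintuple representation itself is straightforward to encode; the illustrative data structure below uses hypothetical field names and label strings, not the dataset's actual vocabulary.

from dataclasses import dataclass
from typing import Tuple

Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixels

@dataclass
class InteractionQuintuple:
    """<instrument box, tissue box, instrument class, tissue class, action class>."""
    instrument_box: Box
    tissue_box: Box
    instrument_class: str
    tissue_class: str
    action_class: str

# Example annotation for one frame of a cataract surgery video (illustrative labels only):
example = InteractionQuintuple(
    instrument_box=(120.0, 80.0, 180.0, 150.0),
    tissue_box=(100.0, 60.0, 260.0, 220.0),
    instrument_class="phaco handpiece",
    tissue_class="lens",
    action_class="aspirate",
)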

 

img6

 

[PaperID: 943] Interaction-Oriented Feature Decomposition for Medical Image Lesion Detection

Junyong Shen; Virtual; Machine Learning Algorithms and Applications; Poster 8

 

Common lesion detection networks typically use lesion features for classification and localization. However, many lesions are classified only by lesion features, without considering their relation to global context features, which leads to misclassification. In this paper, we propose an Interaction-Oriented Feature Decomposition (IOFD) network to improve detection performance on context-dependent lesions. Specifically, we decompose the features output by a backbone into global context features and lesion features that are optimized independently. Then, we design two novel modules to improve lesion classification accuracy. A Global Context Embedding (GCE) module is designed to extract global context features. A Global Context Cross Attention (GCCA) module without additional parameters is designed to model the interaction between global context features and lesion features. Besides, considering the different features required by the classification and localization tasks, we further adopt a task decoupling strategy. IOFD is easy to train and is end-to-end in both training and inference. Experimental results on datasets of two modalities show that IOFD outperforms state-of-the-art algorithms, demonstrating its effectiveness and generality. The source code is available at https://github.com/mklz-sjy/IOFD.
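
Since GCCA is described as attention "without additional parameters", one plausible reading is a plain dot-product cross attention between lesion queries and global context tokens, as sketched below; this is an interpretation for illustration, not the paper's exact module.

import torch

def parameter_free_cross_attention(lesion_feats, context_feats):
    """Dot-product cross attention with no learnable weights.

    lesion_feats  : (B, N, D) per-lesion (ROI) features acting as queries
    context_feats : (B, M, D) global context tokens acting as keys/values
    Returns lesion features refreshed with attended global context."""
    d = lesion_feats.shape[-1]
    attn = torch.softmax(lesion_feats @ context_feats.transpose(1, 2) / d ** 0.5, dim=-1)
    attended = attn @ context_feats                  # (B, N, D)
    return lesion_feats + attended                   # residual interaction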

img7

 

[PaperID: 1164] Opinions Vary? Diagnosis First!

Junde Wu, Huihui Fang, Dalu Yang, Zhaowei Wang, Wenshuo Zhou, Fangxin Shang, Yehui Yang, Yanwu Xu; Computer Aided Diagnosis I; Poster 1

 

With the advancement of deep learning techniques, an increasing number of methods have been proposed for optic disc and cup (OD/OC) segmentation from fundus images. Clinically, OD/OC segmentation is often annotated by multiple clinical experts to mitigate personal bias. However, it is hard to train automated deep learning models on multiple labels. A common practice to tackle the issue is majority vote, e.g., taking the average of the multiple labels. However, such a strategy ignores the different expertness of the medical experts. Motivated by the observation that OD/OC segmentation is often used for glaucoma diagnosis clinically, in this paper we propose a novel strategy to fuse the multi-rater OD/OC segmentation labels via the glaucoma diagnosis performance. Specifically, we assess the expertness of each rater through an attentive glaucoma diagnosis network. For each rater, its contribution to the diagnosis is reflected as an expertness map. To ensure the expertness maps generalize across different glaucoma diagnosis models, we further propose an Expertness Generator (ExpG) to eliminate high-frequency components in the optimization process. Based on the obtained expertness maps, the multi-rater labels can be fused into a single ground truth, which we dub the Diagnosis First Ground-truth (DiagFirstGT). Experimental results show that, using DiagFirstGT as ground truth, OD/OC segmentation networks predict masks with superior glaucoma diagnosis performance.
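
Conceptually, once per-rater expertness maps are available, the fusion step can be written as a per-pixel weighted combination, as in the hedged sketch below; in the paper the expertness maps themselves are learned through the diagnosis network, which this snippet does not cover.

import torch

def fuse_multirater_labels(masks, expertness):
    """Fuse multi-rater segmentation labels into a single soft ground truth.

    masks      : (R, H, W) binary masks from R raters
    expertness : (R, H, W) per-pixel expertness maps, one per rater
    Returns a soft (H, W) fused mask weighted by rater expertness."""
    weights = torch.softmax(expertness, dim=0)   # normalize across raters per pixel
    return (weights * masks.float()).sum(dim=0)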

img8

 

[PaperID: 1319] Learning self-calibrated optic disc and cup segmentation from multi-rater annotations


Junde Wu, Huihui Fang, Fangxin Shang, Zhaowei Wang, Dalu Yang, Wenshuo Zhou, Yehui Yang, Yanwu Xu; Image Segmentation, Registration & Reconstruction II; Poster 6

 

The segmentation of the optic disc (OD) and optic cup (OC) from fundus images is an important fundamental task for glaucoma diagnosis. In clinical practice, it is often necessary to collect opinions from multiple experts to obtain the final OD/OC annotation. This clinical routine helps to mitigate individual bias. But when the data is multiply annotated, standard deep learning models become inapplicable. In this paper, we propose a novel neural network framework to learn OD/OC segmentation from multi-rater annotations. The segmentation results are self-calibrated through the iterative optimization of multi-rater expertness estimation and calibrated OD/OC segmentation. In this way, the proposed method can realize a mutual improvement of both tasks and finally obtain a refined segmentation result. Specifically, we propose a Diverging Model (DivM) and a Converging Model (ConM) to process the two tasks respectively. ConM segments the raw image based on the multi-rater expertness map provided by DivM. DivM generates the multi-rater expertness map from the segmentation mask provided by ConM. The experimental results show that, by recurrently running ConM and DivM, the results can be self-calibrated so as to outperform a range of state-of-the-art (SOTA) multi-rater segmentation methods.
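
The alternation between DivM and ConM can be summarized by a schematic inference loop like the one below; the initialization from the averaged rater masks and the number of iterations are assumptions made only for illustration.

import torch

@torch.no_grad()
def self_calibrated_inference(image, conm, divm, multi_rater_masks, iters: int = 3):
    """Alternate the two models at inference time (a schematic sketch).

    conm : callable(image, expertness_map) -> calibrated segmentation mask
    divm : callable(segmentation_mask)     -> multi-rater expertness map
    multi_rater_masks : (R, H, W) raw annotations, used to seed the loop
    """
    # Start from a simple average of the raters' masks (an assumed initialization).
    seg = multi_rater_masks.float().mean(dim=0, keepdim=True)
    for _ in range(iters):
        expertness = divm(seg)            # estimate per-rater expertness
        seg = conm(image, expertness)     # re-segment conditioned on expertness
    return seg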

img9

 

[PaperID: 1318] TBraTS: Trusted Brain Tumor Segmentation

Ke Zou, Xuedong Yuan, Xiaojing Shen, Meng Wang, and Huazhu Fu; Poster WV-8-PC35

 

Despite recent improvements in the accuracy of brain tumor segmentation, the results still exhibit low levels of confidence and robustness. Uncertainty estimation is one effective way to change this situation, as it provides a measure of confidence in the segmentation results. In this paper, we propose a trusted brain tumor segmentation network which can generate robust segmentation results and reliable uncertainty estimations without excessive computational burden and modification of the backbone network. In our method, uncertainty is modeled explicitly using subjective logic theory, which treats the predictions of the backbone neural network as subjective opinions by parameterizing the class probabilities of the segmentation as a Dirichlet distribution. Meanwhile, the trusted segmentation framework learns the function that gathers reliable evidence from the features leading to the final segmentation results. Overall, our unified trusted segmentation framework endows the model with reliability and robustness to out-of-distribution samples. To evaluate the effectiveness of our model in robustness and reliability, qualitative and quantitative experiments are conducted on the BraTS 2019 dataset.
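
The subjective-logic parameterization follows the usual evidential recipe: non-negative evidence defines a Dirichlet whose strength yields per-class belief and an explicit uncertainty mass. The sketch below illustrates this; the choice of softplus as the evidence function is an assumption, not necessarily the paper's.

import torch
import torch.nn.functional as F

def dirichlet_opinion(logits):
    """Turn per-voxel network outputs into a subjective-logic opinion.

    logits : (B, K, ...) raw outputs for K segmentation classes.
    Evidence is non-negative, alpha = evidence + 1 parameterizes a Dirichlet,
    belief_k = evidence_k / S and uncertainty = K / S with S = sum(alpha)."""
    evidence = F.softplus(logits)             # non-negative evidence
    alpha = evidence + 1.0                    # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    belief = evidence / strength              # per-class belief masses
    uncertainty = logits.shape[1] / strength  # one uncertainty value per voxel
    prob = alpha / strength                   # expected class probabilities
    return belief, uncertainty, prob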

img10

 

[PaperID: 1699] Screening of Dementia on OCTA Images via Multi-projection Consistency and Complementarity

Wang, Xingyue; Virtual; Computer Aided Diagnosis I; Poster 1

 

Recent clinical studies have suggested that retinal vasculature alterations are associated with dementia, and that eye examination may facilitate its early screening. Optical Coherence Tomography Angiography (OCTA) has shown its superiority in visualizing the superficial vascular complex (SVC), deep vascular complex (DVC), and choriocapillaris, and it has been extensively used in clinical practice. However, the information in OCTA is far from fully mined by existing methods, which straightforwardly analyze the multiple projections of OCTA by averaging or concatenation. These methods do not take into account the relationships between multiple projections. Accordingly, a Multi-projection Consistency and Complementarity Learning Network (MUCO-Net) is proposed in this paper to explore the diagnosis of dementia based on OCTA. Firstly, a consistency and complementarity attention (CsCp) module is developed to understand the complex relationships among the various projections. Then, a cross-view fusion (CVF) module is introduced to combine the multi-scale features from the CsCp. In addition, the number of input flows of the proposed modules is flexible, so as to boost interactions across features from different projections. In the experiments, MUCO-Net is applied to two OCTA datasets to screen for dementia and to diagnose fundus diseases. The effectiveness of MUCO-Net is demonstrated by its superior performance over state-of-the-art methods.

img11

 

[PaperID: 1826] SeATrans: Learning Segmentation-Assisted diagnosis model via Transformer

Junde Wu, Huihui Fang, Fangxin Shang, Dalu Yang, Zhaowei Wang, Jing Gao, Yehui Yang, Yanwu Xu; Image Segmentation, Registration & Reconstruction III; Poster 7

 

Clinically, the accurate annotation of lesions/tissues can significantly facilitate disease diagnosis. For example, the segmentation of the optic disc/cup (OD/OC) on fundus images facilitates glaucoma diagnosis, the segmentation of skin lesions on dermoscopic images is helpful for melanoma diagnosis, etc. With the advancement of deep learning techniques, a wide range of methods have shown that lesion/tissue segmentation can also facilitate automated disease diagnosis models. However, existing methods are limited in the sense that they can only capture static regional correlations in the images. Inspired by the global and dynamic nature of the Vision Transformer, in this paper we propose the Segmentation-Assisted diagnosis Transformer (SeATrans) to transfer segmentation knowledge to the disease diagnosis network. Specifically, we first propose an asymmetric multi-scale interaction strategy to correlate each single low-level diagnosis feature with multi-scale segmentation features. Then, an effective strategy called SeA-block is adopted to vitalize the diagnosis features via the correlated segmentation features. To model the segmentation-diagnosis interaction, the SeA-block first embeds the diagnosis feature based on the segmentation information via an encoder, and then transfers the embedding back to the diagnosis feature space via a decoder. Experimental results demonstrate that SeATrans surpasses a wide range of state-of-the-art (SOTA) segmentation-assisted diagnosis methods on several disease diagnosis tasks.
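
A minimal sketch of the SeA-block idea, assuming standard multi-head cross attention as the "encoder" and an MLP as the "decoder" back to the diagnosis feature space, is shown below; the dimensions and head counts are illustrative, not the paper's configuration.

import torch
import torch.nn as nn

class SeABlockSketch(nn.Module):
    """Rough sketch of a segmentation-assisted attention block: the diagnosis
    feature attends to segmentation features (encoder), and the result is
    projected back to the diagnosis feature space (decoder)."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.encoder_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, diag_tokens, seg_tokens):
        # diag_tokens: (B, N, D) diagnosis features; seg_tokens: (B, M, D) segmentation features
        embedded, _ = self.encoder_attn(query=diag_tokens, key=seg_tokens, value=seg_tokens)
        return self.norm(diag_tokens + self.decoder(embedded))   # vitalized diagnosis features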

img12

 

Workshop and Challenge Overview

The 9th MICCAI Workshop on Ophthalmic Medical Image Analysis (OMIA9) will be held during MICCAI 2022. The OMIA workshop aims to bring together scientists, clinicians, and students from the multiple disciplines of the growing ophthalmic image analysis community, such as electronic engineering, computer science, mathematics, and medicine, to discuss the latest advances in the field. Having already been held successfully eight times, OMIA has become the most widely recognized ophthalmic imaging AI workshop and community worldwide. This year, OMIA9 will feature 8 oral presentations and 16 poster presentations.

At the same time, we are hosting an ophthalmology-related MICCAI challenge: Glaucoma OCT Analysis and Layer Segmentation (GOALS). The GOALS challenge comprises two tasks built on OCT images: fundus structure layer segmentation and automatic glaucoma recognition. The challenge attracted more than 400 registered teams, of which 15 advanced to the final. The final results will be announced at the OMIA9 workshop on September 22nd.

img13

 QR code of the GOALS challenge homepage and illustration of the challenge tasks

The GOALS challenge is part of the iChallenge series, an international ophthalmic challenge series launched in 2018 by iMED's Dr. Yanwu Xu (a Baidu intelligent healthcare scientist) together with Prof. Xiulan Zhang's team at Zhongshan Ophthalmic Center, Sun Yat-sen University. iChallenge aims to share large, high-quality, finely annotated ophthalmic imaging datasets, to strengthen communication among researchers, and to promote the development of AI algorithms for diagnosis and image analysis. It has grown into one of the largest and most authoritative international competitions in ophthalmic medical image analysis. To date, the iChallenge series has successfully held seven international challenges on the computer-aided diagnosis of glaucoma, age-related macular degeneration, pathological myopia, and other eye diseases, released more than 10,000 annotated images, and published three papers in top journals of medical image processing.

img14

 Timeline of the iChallenge series


 

Thank you for your interest in the iMED research group! For inquiries, please visit www.imed-lab.com.

iMED Shenzhen and iMED Ningbo are continuously recruiting:

·       Assistant/Associate Professors, associate researchers, postdoctoral fellows, research assistants, and engineers;

·       PhD and Master's students, interns, etc.