Med-Tuning: Exploring Parameter-Efficient Transfer Learning for Medical Volumetric Segmentation

Wenxuan Wang, Jiachen Shen, Chen Chen, Jianbo Jiao, Yan Zhang, Shanshan Song, Jiangyun Li

Research output: Working paper/Preprint


Abstract

Deep learning-based medical volumetric segmentation methods either train the model from scratch or follow the standard "pre-training then finetuning" paradigm. Although finetuning a well pre-trained model on downstream tasks can harness its representation power, standard full finetuning is costly in terms of computation and memory footprint. In this paper, we present the first study on parameter-efficient transfer learning for medical volumetric segmentation and propose a novel framework named Med-Tuning based on intra-stage feature enhancement and inter-stage feature interaction. Given a large-scale model pre-trained on 2D natural images, our method can exploit both the multi-scale spatial feature representations and the temporal correlations along image slices, which are crucial for accurate medical volumetric segmentation. Extensive experiments on three benchmark datasets (including CT and MRI) show that our method achieves better results than previous state-of-the-art parameter-efficient transfer learning methods and full finetuning on the segmentation task, with far fewer tuned parameters. Compared to full finetuning, our method reduces the number of finetuned parameters by up to 4x while delivering even better segmentation performance.
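The abstract describes the general parameter-efficient transfer learning recipe: keep the large pretrained 2D backbone frozen and train only small inserted modules on the downstream segmentation data. The sketch below illustrates that generic recipe in PyTorch; it is not the paper's actual Med-Tuning blocks (intra-stage feature enhancement and inter-stage feature interaction), and the `Adapter` and `TunedBackbone` names, layer choices, and dimensions are illustrative assumptions only.

```python
# Minimal sketch of parameter-efficient finetuning (assumed, not the paper's
# exact method): freeze a pretrained per-stage backbone and train only small
# bottleneck adapters inserted after each stage.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class TunedBackbone(nn.Module):
    """Frozen pretrained stages with a trainable adapter after each stage."""

    def __init__(self, stages: nn.ModuleList, dims: list[int]):
        super().__init__()
        self.stages = stages
        for p in self.stages.parameters():
            p.requires_grad = False          # backbone stays frozen
        self.adapters = nn.ModuleList(Adapter(d) for d in dims)

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        feats = []
        for stage, adapter in zip(self.stages, self.adapters):
            x = adapter(stage(x))            # only the adapter receives gradients
            feats.append(x)                  # multi-scale features for a seg head
        return feats


if __name__ == "__main__":
    # Toy stand-in backbone; a real setup would use a pretrained 2D encoder.
    dims = [64, 128, 256]
    ins = [32] + dims[:-1]
    stages = nn.ModuleList(
        nn.Sequential(nn.Linear(i, o), nn.GELU()) for i, o in zip(ins, dims)
    )
    model = TunedBackbone(stages, dims)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {trainable} / total: {total} ({100 * trainable / total:.1f}%)")
```

Only the adapter parameters are updated during finetuning, which is what yields the large reduction in tuned parameters relative to full finetuning reported in the abstract.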
Original language: English
Publisher: arXiv
DOIs
Publication status: Published - 21 Apr 2023
