Cross-Task Representation Learning for Anatomical Landmark Detection

Zeyu Fu, Jianbo Jiao, Michael Suttie, J. Alison Noble

Research output: Working paper/Preprint


Abstract

Recently, there has been an increasing demand for automatically detecting anatomical landmarks, which provide rich structural information to facilitate subsequent medical image analysis. Current methods for this task often leverage the power of deep neural networks, but a major challenge in fine-tuning such models for medical applications is the insufficient number of labeled samples. To address this, we propose to regularize knowledge transfer across source and target tasks through cross-task representation learning. The proposed method is demonstrated for extracting facial anatomical landmarks that facilitate the diagnosis of fetal alcohol syndrome. The source and target tasks in this work are face recognition and landmark detection, respectively. The main idea of the proposed method is to retain the feature representations of the source model on the target task data and to leverage them as an additional source of supervisory signals for regularizing the target model learning, thereby improving its performance under limited training samples. Concretely, we present two approaches to the proposed representation learning, constraining either the final or the intermediate features of the target model. Experimental results on a clinical face image dataset demonstrate that the proposed approach works well with few labeled data and outperforms the compared approaches.
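To make the regularization idea concrete, the following is a minimal PyTorch-style sketch of one plausible formulation: a frozen face-recognition (source) model provides feature targets for the landmark-detection (target) model on the same training images, and a feature-matching term is added to the landmark loss. The class name, the loss weight lam, and the use of heatmap MSE for the landmark supervision are assumptions made for illustration, not the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossTaskRegularizedLoss(nn.Module):
        """Landmark loss plus a feature-retention term against a frozen source model.

        Hypothetical sketch: names, shapes, and the weighting scheme are assumed.
        """

        def __init__(self, source_model: nn.Module, lam: float = 0.1):
            super().__init__()
            # Frozen face-recognition (source) model; no gradients flow into it.
            self.source_model = source_model.eval()
            for p in self.source_model.parameters():
                p.requires_grad = False
            self.lam = lam  # regularization strength (assumed value)

        def forward(self, images, pred_heatmaps, gt_heatmaps, target_features):
            # Primary supervision: standard heatmap regression for landmarks.
            task_loss = F.mse_loss(pred_heatmaps, gt_heatmaps)

            # Regularizer: keep the target model's features close to the source
            # model's representation of the same target-task images. Depending on
            # the variant, target_features may be intermediate or final features,
            # projected to the source feature dimension beforehand.
            with torch.no_grad():
                source_features = self.source_model(images)
            reg_loss = F.mse_loss(target_features, source_features)

            return task_loss + self.lam * reg_loss

In this sketch, choosing which activations to pass as target_features corresponds to the paper's two variants (constraining final versus intermediate features of the target model).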
Original language: English
Publication status: Published - 28 Sept 2020

Bibliographical note

MICCAI-MLMI 2020

Keywords

  • cs.CV
  • cs.LG
  • eess.IV
