Handcrafted histological transformer (H2T): unsupervised representation of whole slide images

Quoc Dang Vu, Kashif Rajpoot, Shan E.Ahmed Raza, Nasir Rajpoot*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Diagnostic, prognostic and therapeutic decision-making for cancer in pathology clinics can now be carried out based on the analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive because they rely less on cumbersome expert annotation. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use, where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal workings of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. Based on our experiments on various datasets comprising a total of 10,042 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared with recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
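The abstract describes deriving a holistic, fixed-size WSI representation from CNN patch features without supervision. As a rough illustration only, below is a minimal Python sketch of a prototype-based aggregation in that spirit: it is not the authors' H2T implementation, and the k-means prototype discovery, the hard assignment, and all function names are illustrative assumptions.

```python
# Minimal sketch of one idea the abstract describes: building an unsupervised,
# holistic WSI-level representation from patch-level deep features. This is
# NOT the authors' H2T implementation; prototype discovery via k-means and the
# hard-assignment pooling below are illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans


def discover_prototypes(patch_features: np.ndarray, n_prototypes: int = 16) -> np.ndarray:
    """Cluster patch features pooled from many WSIs into prototypical patterns."""
    kmeans = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0)
    kmeans.fit(patch_features)
    return kmeans.cluster_centers_  # (n_prototypes, feature_dim)


def wsi_representation(patch_features: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Pool one slide's patch features against the prototypes.

    Each patch is assigned to its nearest prototype; the per-prototype mean
    feature vectors are concatenated into a fixed-size WSI descriptor.
    """
    # Squared Euclidean distance from every patch to every prototype.
    dists = ((patch_features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)  # nearest prototype per patch
    parts = []
    for k in range(prototypes.shape[0]):
        members = patch_features[assign == k]
        # Empty clusters contribute a zero vector so the descriptor size is fixed.
        parts.append(members.mean(axis=0) if len(members) else np.zeros(patch_features.shape[1]))
    return np.concatenate(parts)  # (n_prototypes * feature_dim,)


# Usage: in practice the features would come from a pretrained CNN applied to
# WSI patches; random vectors stand in for them here.
rng = np.random.default_rng(0)
bank = rng.normal(size=(5000, 512))   # patch features pooled across slides
protos = discover_prototypes(bank)
slide = rng.normal(size=(800, 512))   # one slide's patch features
descriptor = wsi_representation(slide, protos)
print(descriptor.shape)               # (8192,)
```

One appeal of this style of aggregation, consistent with the transparency argument in the abstract, is that each slice of the resulting descriptor corresponds to one discoverable tissue pattern rather than an opaque learned embedding.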

Original language: English
Article number: 102743
Number of pages: 18
Journal: Medical Image Analysis
Volume: 85
Early online date: 19 Jan 2023
Publication status: Published - Apr 2023

Bibliographical note

Funding Information:
We thank Rob Jewsbury and Simon Graham for their invaluable feedback during the write-up of the manuscript. Quoc Dang Vu is funded by The Royal Marsden NHS Foundation Trust. NR and SR are part of the PathLAKE digital pathology consortium, which is partly funded from the Data to Early Diagnosis and Precision Medicine strand of the government's Industrial Strategy Challenge Fund, managed and delivered by UK Research and Innovation (UKRI). NR and SR are also funded by the European Research Council (funding call H2020 IMI2-RIA). NR was also supported by the UK Medical Research Council (grant award MR/P015476/1), a Royal Society Wolfson Merit Award and the Alan Turing Institute.

Publisher Copyright:
© 2023 The Author(s)

Keywords

  • Computational pathology
  • Deep learning
  • Transformer
  • Unsupervised learning
  • WSI representation

ASJC Scopus subject areas

  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging
  • Computer Vision and Pattern Recognition
  • Health Informatics
  • Computer Graphics and Computer-Aided Design
