CellViT: Vision Transformers for Precise Cell Segmentation and Classification

Abstract

Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, they remain challenging due to variability in nuclei staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for this task, we explore the potential of Transformer-based networks in this domain. We introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on Vision Transformers, called CellViT. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.51 and an F1-detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT
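The panoptic quality (PQ) and F1-detection scores reported above follow standard definitions: PQ averages the IoU of matched predicted/ground-truth instances, penalized by unmatched predictions (false positives) and missed instances (false negatives). A minimal sketch of both metrics (the matching step, typically a greedy IoU > 0.5 assignment, is assumed to have been done already):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """PQ = sum of matched IoUs / (TP + 0.5*FP + 0.5*FN).

    matched_ious: IoU of each matched instance pair (each counts as one TP).
    num_fp: predicted instances with no ground-truth match.
    num_fn: ground-truth instances with no predicted match.
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom > 0 else 0.0


def f1_detection(num_tp, num_fp, num_fn):
    """F1 = 2*TP / (2*TP + FP + FN), counting detections regardless of IoU quality."""
    denom = 2 * num_tp + num_fp + num_fn
    return 2 * num_tp / denom if denom > 0 else 0.0


# Example: 2 matched nuclei (IoUs 0.8 and 0.6), 1 spurious prediction, 1 missed nucleus.
pq = panoptic_quality([0.8, 0.6], num_fp=1, num_fn=1)   # 1.4 / 3
f1 = f1_detection(num_tp=2, num_fp=1, num_fn=1)         # 4 / 6
```

Note that PQ is always bounded above by F1 for the same matching, since each TP contributes its IoU (at most 1) to the numerator rather than a full count.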

Publication
arXiv
Fabian Hörst
PhD Student
Moritz Rempe
PhD Student
Lukas Heine
PhD Student
Constantin Seibold
Team Lead Computer Vision
Julius Keyl
Medical Doctor
Jan Egger
Team Lead AI-guided Therapies
Jens Kleesiek
Professor of Translational Image-guided Oncology