Breaking with Fixed Set Pathology Recognition Through Report-Guided Contrastive Training

Abstract

When reading images, radiologists generate text reports describing the findings therein. Current state-of-the-art computer-aided diagnosis tools are trained on a fixed set of predefined categories automatically extracted from these medical reports. This form of supervision limits the potential usage of such models, as they cannot pick up on anomalies outside of their predefined set, making it necessary to retrain the classifier with additional data whenever novel classes appear. In contrast, we investigate direct text supervision to break away from this closed-set assumption. By doing so, we avoid noisy label extraction via text classifiers and incorporate more contextual information. We employ a contrastive global-local dual-encoder architecture to learn concepts directly from unstructured medical reports while maintaining the ability to perform free-form classification. We investigate relevant properties of open-set recognition for radiological data and propose a method to incorporate currently weakly annotated data into training. We evaluate our approach on the large-scale chest X-ray datasets MIMIC-CXR, CheXpert, and ChestX-Ray14 for disease classification. We show that, despite using unstructured medical report supervision, we perform on par with direct label supervision through a sophisticated inference setting.
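The report-guided contrastive training described above follows the general pattern of dual-encoder image-text alignment: an image encoder and a report encoder are pulled together on matching image-report pairs and pushed apart otherwise. The sketch below illustrates such a symmetric contrastive (InfoNCE-style) objective; the encoder inputs, function names, and temperature value are illustrative assumptions and do not reproduce the paper's exact global-local formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_report_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss between image and report embeddings.

    image_emb, text_emb: (batch, dim) tensors from the two encoders;
    row i of each tensor is assumed to come from the same image/report pair.
    """
    # L2-normalize so the dot product becomes a cosine similarity
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix, scaled by the temperature
    logits = image_emb @ text_emb.t() / temperature

    # The i-th image matches the i-th report
    targets = torch.arange(image_emb.size(0), device=image_emb.device)

    # Contrast in both directions: image-to-report and report-to-image
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```

Because classification at inference time can then be posed as comparing an image embedding against text embeddings of arbitrary class descriptions, the model is not tied to the fixed label set seen during training, which is what enables the free-form, open-set classification discussed in the abstract.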

Publication
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022
Constantin Seibold
Team Lead Computer Vision
Rainer Stiefelhagen
Director Computer Vision for Human-Computer Interaction Lab
Jens Kleesiek
Professor of Translational Image-guided Oncology