Radiology report generation aims to automatically produce descriptive text from radiology images, which may present an opportunity to improve radiology reporting and interpretation. A typical setting trains encoder-decoder models on image-report pairs with a cross-entropy loss, which struggles to generate informative sentences for clinical diagnoses because normal findings dominate the datasets. To tackle this challenge and encourage more clinically accurate text outputs, we propose a novel weakly supervised contrastive loss for medical report generation. Experimental results demonstrate that our method benefits from contrasting target reports with incorrect but semantically close ones. It outperforms previous work on both clinical correctness and text generation metrics on two public benchmarks.
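The abstract does not give the loss formulation, but the idea of contrasting a target report against incorrect, semantically close reports is commonly realized as an InfoNCE-style objective over report embeddings. The sketch below is a minimal illustration under that assumption; the function names, the cosine-similarity choice, and the temperature parameter are all illustrative, not the paper's actual method.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_report_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch only).

    anchor    : embedding of the generated report (or image features)
    positive  : embedding of the ground-truth target report
    negatives : embeddings of incorrect but semantically close reports
    tau       : temperature scaling the similarity logits
    """
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = np.array(sims) / tau
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    # Loss is low when the anchor is closer to the target report
    # than to any of the hard negatives.
    return -np.log(probs[0])
```

In this pattern, the "weak supervision" would come from how negatives are mined (e.g., reports from other studies that share most sentences with the target), so the model is penalized specifically for confusing clinically distinct but textually similar reports.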
An Yan, Zexue He, Xing Lu, Jingfeng Du, Amilcare Gentili