This work introduces CholecT40, a new laparoscopic dataset consisting of 40 videos from the public Cholec80 dataset in which all frames have been annotated using 128 triplet classes, and proposes a trainable 3D interaction space that captures the associations between the triplet components.
Recognition of surgical activity is an essential component in developing context-aware decision support for the operating room. In this work, we tackle the recognition of fine-grained activities, modeled as action triplets \(\langle instrument, verb, target \rangle \) representing the tool activity. To this end, we introduce a new laparoscopic dataset, CholecT40, consisting of 40 videos from the public dataset Cholec80, in which all frames have been annotated using 128 triplet classes. Furthermore, we present an approach to recognize these triplets directly from the video data. It relies on a module called the class activation guide, which uses the instrument activation maps to guide the verb and target recognition. To model the recognition of multiple triplets in the same frame, we also propose a trainable 3D interaction space that captures the associations between the triplet components. Finally, we demonstrate the significance of these contributions via several ablation studies and comparisons to baselines on CholecT40.
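To make the idea of a trainable 3D interaction space concrete, below is a minimal PyTorch sketch of one way such an association volume could be realized: per-component logits for instruments, verbs, and targets are combined into a 3D volume scoring every \(\langle instrument, verb, target \rangle \) combination, modulated by a learnable weight tensor. The class name `InteractionSpace3D` and all shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class InteractionSpace3D(nn.Module):
    """Hypothetical sketch of a trainable 3D interaction space.

    Component logits for instruments (I), verbs (V), and targets (T)
    are combined into a 3D volume whose cells score every possible
    <instrument, verb, target> combination; a learnable weight tensor
    captures which associations between components are plausible.
    """

    def __init__(self, num_instruments: int, num_verbs: int, num_targets: int):
        super().__init__()
        # Learnable association weights over the full I x V x T space.
        self.assoc = nn.Parameter(
            torch.randn(num_instruments, num_verbs, num_targets) * 0.01
        )

    def forward(self, inst_logits, verb_logits, target_logits):
        # Outer product of the three component logit vectors:
        # (B, I), (B, V), (B, T) -> (B, I, V, T).
        volume = torch.einsum("bi,bv,bt->bivt", inst_logits, verb_logits, target_logits)
        # Modulate each cell by its learned association weight,
        # yielding one score per <instrument, verb, target> cell.
        return volume * self.assoc
```

Under this reading, scores for the 128 annotated triplet classes would be gathered from the corresponding \((i, v, t)\) cells of the output volume, which also lets several triplets be active in the same frame.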
J. Marescaux, C. Nwoye, Tong Yu, Cristians Gonzalez