3260 papers • 126 benchmarks • 313 datasets
Sign Language Recognition is a computer vision and natural language processing task: automatically recognizing sign language gestures in video and translating them into written or spoken language. The goal is to develop algorithms that understand and interpret sign language, enabling people who use it as their primary mode of communication to communicate more easily with non-signers. (Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison)
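To make the task concrete, here is a minimal, purely illustrative sketch of isolated (word-level) sign recognition: each sign is represented as a sequence of per-frame feature vectors (in practice these would come from an upstream step such as hand-keypoint extraction), and a query is classified by dynamic time warping (DTW) distance to labeled templates. The templates, labels, and feature values below are invented toy data, not from any of the datasets or methods on this page.

```python
def dtw_distance(a, b):
    """DTW distance between two sequences of equal-length feature vectors."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between frame i of a and frame j of b.
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame in a
                                 cost[i][j - 1],      # skip a frame in b
                                 cost[i - 1][j - 1])  # match the frames
    return cost[n][m]

def classify(query, templates):
    """Return the gloss label whose template is nearest by DTW distance."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Toy example: two "signs" as short trajectories of 2-D features.
templates = {
    "HELLO": [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)],
    "THANKS": [(1.0, 0.0), (0.5, 0.0), (0.0, 0.0)],
}
query = [(0.1, 0.1), (0.4, 0.6), (0.9, 1.1)]  # noisy version of "HELLO"
print(classify(query, templates))  # → HELLO
```

Template matching like this only handles isolated signs; continuous sign language recognition and full translation, as covered by the benchmarks below, require sequence models that handle co-articulation and grammar.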
These leaderboards are used to track progress in Sign Language Recognition.
Use these libraries to find Sign Language Recognition models and implementations.