ESPnet-ST-v2 is a revamp of the open-source ESPnet-ST toolkit, necessitated by the broadening interests of the spoken language translation community. ESPnet-ST-v2 supports 1) offline speech-to-text translation (ST), 2) simultaneous speech-to-text translation (SST), and 3) offline speech-to-speech translation (S2ST); each task is supported with a wide variety of approaches, differentiating ESPnet-ST-v2 from other open-source spoken language translation toolkits. The toolkit offers state-of-the-art architectures such as transducers, hybrid CTC/attention, multi-decoders with searchable intermediates, time-synchronous blockwise CTC/attention, Translatotron models, and direct discrete unit models. In this paper, we describe the overall design, example models for each task, and performance benchmarking behind ESPnet-ST-v2, which is publicly available at https://github.com/espnet/espnet.
Zhaoheng Ni, Xiaohui Zhang, Yifan Peng, Patrick Fernandes, Brian Yan, Jiatong Shi, Yun Tang, H. Inaguma, Peter Polák, Dan Berrebbi, Tomoki Hayashi, Moto Hira, Soumi Maiti