CPM, with 2.6 billion parameters and 100 GB of Chinese training data, is the largest Chinese pre-trained language model, and can facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding.
Authors: Zhiyuan Liu, Juan-Zi Li, Hao Zhou, Xiaoyan Zhu, Maosong Sun, Fanchao Qi, Jian Guan, Xiaozhi Wang, Yujia Qin, Pei Ke, Yuxian Gu, Guoyang Zeng, Deming Ye, Yanan Zheng, Zhengyan Zhang, Yusheng Su, Haozhe Ji, Huanqi Cao, S. Chen, Daixuan Li, Zhenbo Sun, Wentao Han