An updated version of CPT & Chinese BART is released. In the new version, we changed the following parts:
Vocabulary
We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add 6,800+ missing Chinese characters (most of them traditional Chinese characters); 2) remove redundant tokens (e.g., Chinese character tokens with a ## prefix); 3) add some English tokens to reduce OOV.
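As a quick sanity check on the new vocabulary (a minimal sketch; `fnlp/cpt-base` is the model id this card describes, and the expected size comes from the numbers above):

```python
from transformers import BertTokenizer

# Load the updated vocabulary; this card instructs using BertTokenizer for CPT.
tok = BertTokenizer.from_pretrained("fnlp/cpt-base")
print(len(tok))  # expected: 51271 with the updated vocabulary
```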
Position Embeddings
We extend the max_position_embeddings from 512 to 1024.
We initialize the new models from the old checkpoints with vocabulary alignment: token embeddings found in the old checkpoints are copied, and the newly added parameters are randomly initialized. We then further train the new CPT & Chinese BART for 50K steps with batch size 2048, max sequence length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
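The vocabulary-aligned initialization can be pictured with a short sketch (a hypothetical helper for illustration, not the authors' released script; the function name, its arguments, and the init std of 0.02 are assumptions):

```python
import torch

def align_embeddings(old_emb: torch.Tensor, old_vocab: dict,
                     new_vocab: dict, hidden: int) -> torch.Tensor:
    # Randomly initialize the full new table (e.g. 51271 x hidden) ...
    new_emb = torch.empty(len(new_vocab), hidden).normal_(mean=0.0, std=0.02)
    # ... then copy over the rows of tokens that also exist in the old vocabulary.
    for token, new_id in new_vocab.items():
        old_id = old_vocab.get(token)
        if old_id is not None:
            new_emb[new_id] = old_emb[old_id]
    return new_emb

# Position embeddings are extended analogously: the first 512 rows are copied
# from the old checkpoint and rows 512..1023 are randomly initialized.
```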
The results compared with the previous checkpoints are as follows:
|              | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG   |
|--------------|-------|---------|---------|-------|-------|
| **Previous** |       |         |         |       |       |
| bart-base    | 73.0  | 60      | 62.1    | 37.8  | 58.23 |
| cpt-base     | 75.1  | 60.5    | 63.0    | 38.2  | 59.20 |
| bart-large   | 75.7  | 62.1    | 64.2    | 40.6  | 60.65 |
| cpt-large    | 75.9  | 61.8    | 63.7    | 42.0  | 60.85 |
| **Updated**  |       |         |         |       |       |
| bart-base    | 73.03 | 61.25   | 61.51   | 38.78 | 58.64 |
| cpt-base     | 74.40 | 61.23   | 62.09   | 38.81 | 59.13 |
| bart-large   | 75.81 | 61.52   | 64.62   | 40.90 | 60.71 |
| cpt-large    | 75.97 | 61.63   | 63.83   | 42.08 | 60.88 |
The results show that the updated models maintain comparable performance to the previous checkpoints. There are still some cases where an updated model is slightly worse than its predecessor, for the following reasons: 1) training for a few additional steps did not lead to a significant performance improvement; 2) some downstream tasks are unaffected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
Note that to use the updated models, please update `modeling_cpt.py` (new version: download Here) and the vocabulary (refresh the cache).
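For example, one way to refresh the cached vocabulary is the standard `force_download` argument of transformers (a sketch; other cache-clearing approaches work too):

```python
from transformers import BertTokenizer

# Re-download the tokenizer files, bypassing the local cache, so the
# updated 51271-token vocabulary replaces the old cached one.
tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base", force_download=True)
```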
Model description
This is an implementation of CPT-Base. To use CPT, please import the file `modeling_cpt.py` (Download Here), which defines the architecture of CPT, into your project.
Note: Please use BertTokenizer for the model vocabulary. DO NOT use the original BartTokenizer.
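A minimal usage sketch, assuming `CPTForConditionalGeneration` is among the classes defined in the downloaded `modeling_cpt.py` (the class name is taken from the CPT repository and is an assumption here):

```python
from transformers import BertTokenizer
from modeling_cpt import CPTForConditionalGeneration  # from the downloaded file

# BertTokenizer, not BartTokenizer, matches the CPT vocabulary (see note above).
tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-base")

inputs = tokenizer("北京是[MASK]的首都", return_tensors="pt")
pred_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=20)
print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
```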
Citation
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}