**cpt-en-base** is an English biomedical encoder built by continued pretraining of ModernBERT using a **CLM detour** recipe. Instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. This produces lasting representational changes in early transformer layers that improve downstream biomedical performance.
cpt-en-base achieves **78.0% average F1** across 11 English biomedical benchmarks (5 Clinical + 6 BigBIO), the highest balanced score across both task families.
You can use this model with the `transformers` library (v4.48.0+):

```bash
pip install -U "transformers>=4.48.0"
```
If your GPU supports it, install Flash Attention for best efficiency:

```bash
pip install flash-attn
```
## Masked Language Modeling
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "rntc/cpt-en-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "The patient was diagnosed with [MASK] and started on antibiotics."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Locate the [MASK] position and take the highest-scoring vocabulary entry.
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
```
## Fine-tuning (Classification, NER, etc.)
```python
from transformers import AutoTokenizer, AutoModel

model_id = "rntc/cpt-en-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "The patient presented with acute myocardial infarction and was treated with percutaneous coronary intervention."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
outputs = model(**inputs)
# outputs.last_hidden_state: [batch, seq_len, 768]
```
**Note:** cpt-en-base does not use token type IDs; you can omit the `token_type_ids` parameter.
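For token-level tasks such as NER, a task head can be attached with the standard `transformers` auto classes. A minimal sketch, assuming a hypothetical three-label disease tagging scheme (the labels are illustrative, not part of this release):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Hypothetical label set for illustration; substitute your task's labels.
labels = ["O", "B-DISEASE", "I-DISEASE"]

tokenizer = AutoTokenizer.from_pretrained("rntc/cpt-en-base")
model = AutoModelForTokenClassification.from_pretrained(
    "rntc/cpt-en-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The randomly initialized classification head is then fine-tuned on labeled data.
```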
## Training

### Data
| Corpus | Proportion | Description |
|---|---|---|
| PubMed | 60% | Biomedical abstracts |
| Med-Inst | 20% | Medical instructions |
| MIMIC | 20% | Clinical notes |
| **Total** | **50B tokens** | Single epoch |
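This mix corresponds to proportional sampling across the three corpora. A sketch of one way to reproduce such a mix with the `datasets` library (the dataset identifiers below are placeholders, not the actual training sources):

```python
from datasets import interleave_datasets, load_dataset

# Placeholder dataset names; the actual corpora are not all publicly redistributable.
pubmed = load_dataset("your-org/pubmed-abstracts", split="train", streaming=True)
med_inst = load_dataset("your-org/medical-instructions", split="train", streaming=True)
mimic = load_dataset("your-org/mimic-notes", split="train", streaming=True)

# 60/20/20 mix, matching the proportions in the table above.
mix = interleave_datasets(
    [pubmed, med_inst, mimic], probabilities=[0.6, 0.2, 0.2], seed=42
)
```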
### Methodology

cpt-en-base is trained in two phases, initialized from ModernBERT-base:
**Phase 1 — CLM detour (50B tokens):** The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies early transformer layers for domain adaptation.
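To make the mask swap concrete, here is a toy illustration (not the training code) of the difference between the bidirectional mask MLM uses and the causal mask applied during the detour:

```python
import torch

seq_len = 6

# Bidirectional mask (MLM): every position may attend to every other position.
bidirectional = torch.ones(seq_len, seq_len, dtype=torch.bool)

# Causal mask (CLM detour): position i may attend only to positions 0..i,
# so next-token prediction supervises all seq_len positions in a single pass.
causal = torch.tril(torch.ones(seq_len, seq_len)).bool()
```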
**Phase 2 — MLM decay (5B tokens):** Bidirectional attention is restored, and the model is trained with masked language modeling at a 15% masking rate. The learning rate decays from its peak to 10% of peak following a 1-sqrt schedule.
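One plausible reading of the 1-sqrt schedule, as a sketch; the exact formula is not given here, so the functional form and `final_ratio` endpoint below are assumptions matching the "peak to 10%" description:

```python
import math

def one_sqrt_lr(step, total_steps, peak_lr=2e-4, final_ratio=0.10):
    """Assumed 1-sqrt decay: lr(t) = peak * (1 - (1 - final_ratio) * sqrt(t / T)).

    Starts at peak_lr (step 0) and ends at final_ratio * peak_lr (step T),
    matching the 'decays from peak to 10%' description above.
    """
    frac = min(step / total_steps, 1.0)
    return peak_lr * (1.0 - (1.0 - final_ratio) * math.sqrt(frac))
```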
Both phases use the same data mix. Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, and a global batch size of 384 sequences (~3.1M tokens) on 4x H100 GPUs with Composer.
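A sketch of how the stated optimizer and precision settings map onto Composer; the `build_mlm_dataloader` helper is hypothetical, and the Trainer wiring is illustrative rather than the released training script:

```python
import torch
from composer import Trainer
from composer.models import HuggingFaceModel
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Initialization point per the methodology above.
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
hf_model = AutoModelForMaskedLM.from_pretrained("answerdotai/ModernBERT-base")

# Stated hyperparameters: AdamW with lr=2e-4, betas=(0.9, 0.98).
optimizer = torch.optim.AdamW(hf_model.parameters(), lr=2e-4, betas=(0.9, 0.98))

train_dataloader = build_mlm_dataloader()  # hypothetical helper yielding the 60/20/20 mix

trainer = Trainer(
    model=HuggingFaceModel(hf_model, tokenizer=tokenizer),
    train_dataloader=train_dataloader,
    optimizers=optimizer,
    precision="amp_bf16",  # bf16 mixed precision
    max_duration="1ep",    # single pass over the 50B-token mix
)
trainer.fit()
```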
## Why a CLM Detour?

CLM supervises every token position, producing dense gradient updates that deeply modify early transformer layers (layers 0-7). These changes persist through the MLM decay phase — a phenomenon we call **computational hysteresis**. We provide causal evidence through freeze interventions showing that early-layer modification is both necessary and sufficient for the CLM benefit (a double dissociation). See our paper for the full mechanistic analysis.
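As an illustration of what a freeze intervention looks like in practice (a sketch, assuming the Hugging Face ModernBERT layout where encoder blocks live in `model.layers`; this is not the exact experimental code):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("rntc/cpt-en-base")

# Freeze the early blocks (layers 0-7) so gradient updates cannot change them,
# mirroring the freeze interventions described above.
for layer in model.layers[:8]:
    for param in layer.parameters():
        param.requires_grad = False

# The remaining blocks and the embeddings stay trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```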
## Evaluation

English biomedical benchmark results (11 tasks, 5 seeds per model):
### Clinical Tasks

| Model | Ctx | ChemProt | Phenotype | COS | Social Hist. | DEID | Avg |
|---|---|---|---|---|---|---|---|
| **cpt-en-base** | 8192 | 90.1 | 61.9 | 95.2 | 54.2 | 83.2 | 76.9 |
| BioClinical-ModernBERT | 8192 | 90.0 | 60.7 | 94.8 | 56.0 | 81.8 | 76.7 |
| PubMedBERT | 512 | 90.2 | 52.0 | 95.0 | 48.7 | 80.4 | 73.3 |
| ModernBERT-base | 8192 | 89.5 | 48.4 | 94.0 | 53.1 | 78.3 | 72.7 |
### BigBIO Tasks

| Model | Ctx | AnatEM | BC5CDR | JNLPBA | NCBI | GAD | HoC | Avg |
|---|---|---|---|---|---|---|---|---|
| **cpt-en-base** | 8192 | 81.0 | 89.1 | 74.5 | 80.1 | 78.8 | 70.0 | 78.9 |
| BioClinical-ModernBERT | 8192 | 79.2 | 88.7 | 74.8 | 78.7 | 75.8 | 67.0 | 77.4 |
| PubMedBERT | 512 | 83.3 | 89.7 | 74.9 | 82.1 | 79.3 | 71.0 | 80.1 |
| ModernBERT-base | 8192 | 77.2 | 87.9 | 74.3 | 77.7 | 76.8 | 66.6 | 76.8 |
### Overall

| Model | Clinical | BigBIO | Overall |
|---|---|---|---|
| **cpt-en-base** | 76.9 | 78.9 | 78.0 |
| BioClinical-ModernBERT | 76.7 | 77.4 | 77.0 |
| PubMedBERT | 73.3 | 80.1 | 77.0 |
| ModernBERT-base | 72.7 | 76.8 | 74.9 |
cpt-en-base achieves the highest balanced score (78.0%) across both Clinical and BigBIO task families. PubMedBERT scores higher on short-context BigBIO NER tasks but falls behind on long-context tasks (Phenotype: 52.0% vs 61.9%).
## Intended Use

This model is designed for English biomedical and clinical NLP tasks:

- Named entity recognition (diseases, chemicals, genes, anatomy)
- Information extraction from PubMed abstracts and clinical reports

The 8,192-token context is important for long clinical documents (discharge summaries, pathology reports) that are truncated by 512-token models.
## Limitations

- Trained on English biomedical text; not suitable for other languages without further adaptation. See **cpt-fr-base** for French.
- Encoder model: produces contextualized representations; it does not generate text.
- Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations (HIPAA, etc.).
- The English CLM-MLM improvement (+0.3pp at Base scale) is smaller than in French (+2.9pp) and not statistically significant at Base scale (binomial p=0.27). The practical benefit is clearest at Large scale (+0.8pp) and on long-context tasks.