ModChemBERT: ModernBERT as a Chemical Language Model
ModChemBERT is a ModernBERT-based chemical language model (CLM), trained on SMILES strings for masked language modeling (MLM) and downstream molecular property prediction (classification & regression).
Usage
Load Model
from transformers import AutoModelForMaskedLM, AutoTokenizer
model_id = "Derify/ModChemBERT-MLM-TAFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    dtype="float16",
    device_map="auto",
)
Fill-Mask Pipeline
from transformers import pipeline
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("c1ccccc1[MASK]"))
Intended Use
Primary: Research and development for molecular property prediction, experimentation with pooling strategies, and as a foundational model for downstream applications.
Appropriate for: Binary / multi-class classification (e.g., toxicity, activity) and single-task or multi-task regression (e.g., solubility, clearance) after fine-tuning.
Not intended for generating novel molecules.
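A minimal fine-tuning sketch for one of these tasks, assuming the checkpoint's remote code exposes a sequence-classification head (the label count and example SMILES are placeholders):

from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Derify/ModChemBERT-MLM-TAFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels=2 assumes a binary task such as toxicity; adjust per dataset.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    trust_remote_code=True,
    num_labels=2,
)
inputs = tokenizer(["CCO", "c1ccccc1O"], padding=True, return_tensors="pt")
logits = model(**inputs).logits  # shape: (batch, num_labels)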
Limitations
Out-of-domain performance may degrade for very long (>128 token) SMILES, inorganic or organometallic compounds, polymers, and charged or enumerated tautomers, as these are not well represented in the training data.
No guarantee of synthesizability, safety, or biological efficacy.
Ethical Considerations & Responsible Use
Potential biases arise from training corpora skewed to drug-like space.
Do not deploy in clinical or regulatory settings without rigorous, domain-specific validation.
Architecture
Backbone: ModernBERT
Hidden size: 768
Intermediate size: 1152
Encoder Layers: 22
Attention heads: 12
Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences)
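These values can be confirmed from the checkpoint's configuration, e.g.:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("Derify/ModChemBERT-MLM-TAFT", trust_remote_code=True)
# Expected per the card: 768, 1152, 22, 12
print(config.hidden_size, config.intermediate_size,
      config.num_hidden_layers, config.num_attention_heads)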
Kallergis et al. [1] demonstrated that the CLM embedding method prior to the prediction head can significantly impact downstream performance.
Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the max_seq_mha pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks.
Multiple pooling strategies are supported by ModChemBERT to explore their impact on downstream performance:
cls: last-layer [CLS] token
mean: mean over the last hidden layer
max_cls: max over the [CLS] token across the last k layers
cls_mha: multi-head attention (MHA) with [CLS] as the query
max_seq_mha: MHA with the max-pooled sequence as keys/values and the max-pooled [CLS] as the query
sum_mean: sum over all layers, then mean over tokens
sum_sum: sum over all layers, then sum over tokens
mean_mean: mean over all layers, then mean over tokens
mean_sum: mean over all layers, then sum over tokens
max_seq_mean: max over the last k layers, then mean over tokens
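As a sketch, the pooling strategy could be selected at load time by overriding the corresponding config field. The attribute name classifier_pooling is an assumption inferred from the "Classifier Pooling" columns in the hyperparameter tables below; verify it against the checkpoint's config.json:

from transformers import AutoConfig, AutoModelForSequenceClassification

model_id = "Derify/ModChemBERT-MLM-TAFT"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.classifier_pooling = "max_seq_mha"  # assumed field name; check config.json
config.num_labels = 2
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    config=config,
    trust_remote_code=True,
)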
Training Pipeline
Rationale for MTR Stage
Following Sultan et al. [3], a multi-task regression (MTR) stage on physicochemical properties biases the latent space toward ADME-related representations prior to narrow task-adaptive fine-tuning (TAFT) specialization. Sultan et al. observed that MLM followed by domain-adaptive pretraining (DAPT) with MTR outperforms MLM-only, MTR-only, and MTR + DAPT (MTR) regimes.
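For illustration, the MTR stage corresponds to regressing many property targets at once; with standard transformers heads this can be expressed as follows (the target count is a placeholder, not the actual MTR task list):

from transformers import AutoModelForSequenceClassification

# Multi-task regression: one regression output per physicochemical property.
model = AutoModelForSequenceClassification.from_pretrained(
    "Derify/ModChemBERT-MLM-TAFT",
    trust_remote_code=True,
    num_labels=8,                # placeholder count of MTR targets
    problem_type="regression",   # MSE loss over all targets
)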
Checkpoint Averaging Motivation
Checkpoint averaging was inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], whose results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint.
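A minimal sketch of the idea, assuming a set of already fine-tuned checkpoints merged by uniform parameter averaging (the paths are hypothetical):

import torch
from transformers import AutoModelForMaskedLM

paths = ["ckpt_a", "ckpt_b", "ckpt_c"]  # hypothetical checkpoint directories
models = [AutoModelForMaskedLM.from_pretrained(p, trust_remote_code=True) for p in paths]

# Uniform "model soup": average every parameter tensor across checkpoints.
avg_state = {
    name: torch.stack([m.state_dict()[name].float() for m in models]).mean(dim=0)
    for name in models[0].state_dict()
}
merged = models[0]
merged.load_state_dict(avg_state)  # load_state_dict casts dtypes back as needed
merged.save_pretrained("modchembert-merged")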
Bold indicates the best result in the column; italic indicates the best result among ModChemBERT checkpoints.
* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.
† AVG column shows the mean score across all classification tasks.
‡ AVG column shows the mean scores across all regression tasks without and with the clearance score.
Optimized ModChemBERT Hyperparameters
TAFT Datasets
Optimal parameters (per dataset) for the MLM + DAPT + TAFT OPT merged model:
| Dataset | Learning Rate | Batch Size | Warmup Ratio | Classifier Pooling | Last k Layers |
|---|---|---|---|---|---|
| adme_microsom_stab_h | 3e-5 | 8 | 0.0 | max_seq_mean | 5 |
| adme_microsom_stab_r | 3e-5 | 16 | 0.2 | max_cls | 3 |
| adme_permeability | 3e-5 | 8 | 0.0 | max_cls | 3 |
| adme_ppb_h | 1e-5 | 32 | 0.1 | max_seq_mean | 5 |
| adme_ppb_r | 1e-5 | 32 | 0.0 | sum_mean | N/A |
| adme_solubility | 3e-5 | 32 | 0.0 | sum_mean | N/A |
| astrazeneca_CL | 3e-5 | 8 | 0.1 | max_seq_mha | 3 |
| astrazeneca_LogD74 | 1e-5 | 8 | 0.0 | max_seq_mean | 5 |
| astrazeneca_PPB | 1e-5 | 32 | 0.0 | max_cls | 3 |
| astrazeneca_Solubility | 1e-5 | 32 | 0.0 | max_seq_mean | 5 |
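Each row maps onto standard transformers training arguments; for example, the adme_permeability row could be applied as follows (a sketch; output_dir and any omitted arguments are placeholders):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="taft/adme_permeability",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    warmup_ratio=0.0,
)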
Benchmarking Datasets
Optimal parameters (per dataset) for the MLM + DAPT + TAFT OPT merged model:
| Dataset | Batch Size | Classifier Pooling | Last k Layers | Pooling Attention Dropout | Classifier Dropout | Embedding Dropout |
|---|---|---|---|---|---|---|
| bace_classification | 32 | max_seq_mha | 3 | 0.0 | 0.0 | 0.0 |
| bbbp | 64 | max_cls | 3 | 0.1 | 0.0 | 0.0 |
| clintox | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
| hiv | 32 | max_seq_mha | 3 | 0.0 | 0.0 | 0.0 |
| sider | 32 | mean | N/A | 0.1 | 0.0 | 0.1 |
| tox21 | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
| bace_regression | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
| clearance | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
| esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
| freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
| lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
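The dropout columns correspond to config-level overrides. A sketch for the lipo row follows; classifier_dropout and embedding_dropout follow ModernBERT config naming, while the pooling attention dropout field name is an assumption to verify in the checkpoint's config.json:

from transformers import AutoConfig, AutoModelForSequenceClassification

model_id = "Derify/ModChemBERT-MLM-TAFT"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.classifier_dropout = 0.1  # ModernBERT-style config field
config.embedding_dropout = 0.1   # ModernBERT-style config field
config.classifier_pooling = "max_seq_mha"  # assumed field name (see pooling section)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    config=config,
    trust_remote_code=True,
)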
Hardware
Training and experiments were performed on 2 NVIDIA RTX 3090 GPUs.
Citation
If you use ModChemBERT in your research, please cite the checkpoint and the following:
@software{cortes-2025-modchembert,
  author       = {Emmanuel Cortes},
  title        = {ModChemBERT: ModernBERT as a Chemical Language Model},
  year         = {2025},
  publisher    = {GitHub},
  howpublished = {GitHub repository},
  url          = {https://github.com/emapco/ModChemBERT}
}
References
[1] Kallergis, Georgios, et al. "Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa." Communications Chemistry 8.1 (2025): 114.
[2] Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer- and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025).
[3] Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025).
[4] Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024).
[5] Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." Journal of Natural Language Processing 32.1 (2025): 176-218.
[6] Grattafiori, Aaron, et al. "The Llama 3 Herd of Models." arXiv preprint arXiv:2407.21783 (2024).
[7] Singh, Riya, et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." (2025).