Trinity-Large-Base is a pretrained foundation model from Arcee AI's Trinity Large training run. It is a 398B-parameter sparse Mixture-of-Experts (MoE) model with approximately 13B active parameters per token. The checkpoint was captured after 17 trillion tokens of pretraining, including mid-training learning-rate anneals and context extension, but prior to any instruction tuning or reinforcement learning.
This checkpoint represents the completed pretraining phase and serves as a foundation for research and downstream fine-tuning.
More details on the training of Trinity Large are available in the technical report.
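A minimal loading sketch with the Hugging Face transformers API is shown below. The prompt is illustrative, and `trust_remote_code=True` is an assumption based on the custom AfmoeForCausalLM architecture; loading a ~398B-parameter checkpoint requires multi-GPU sharding.

```python
# Minimal loading sketch (assumes the standard transformers AutoModel API;
# trust_remote_code=True is assumed to be needed for the custom
# AfmoeForCausalLM architecture).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Trinity-Large-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~398B params: multi-GPU sharding required
    device_map="auto",
    trust_remote_code=True,
)

# Base model: expect raw text continuation, not chat-style answers.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```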
Model Variants
The Trinity Large family consists of three checkpoints from the same training run:
- Trinity-Large-Base (this release): full 17T-token pretrained foundation model with mid-training anneals
- Trinity-Large-TrueBase: pre-anneal checkpoint from earlier in the same run (see the comparison with TrueBase below)
Trinity-Large-Base uses a sparse MoE configuration designed to maximize efficiency while maintaining large-scale capacity.
| Hyperparameter | Value |
|---|---|
| Total parameters | ~398B |
| Active parameters per token | ~13B |
| Experts | 256 |
| Active experts | 4 |
| Routing strategy | 4-of-256 (1.56% sparsity) |
| Dense layers | 6 |
| Pretraining context length | 8,192 |
| Context length after extension | 512k |
| Architecture | Sparse MoE (AfmoeForCausalLM) |
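To make the routing row concrete, here is a minimal top-k routing sketch. It is illustrative only: the hidden size, router shape, and softmax-over-top-k renormalization are assumptions for demonstration, not Arcee's actual implementation.

```python
# Illustrative sketch of 4-of-256 top-k expert routing (hidden size and
# renormalization choice are assumptions, not Arcee's implementation).
import torch
import torch.nn.functional as F

NUM_EXPERTS, TOP_K, HIDDEN = 256, 4, 1024  # HIDDEN is illustrative

router = torch.nn.Linear(HIDDEN, NUM_EXPERTS, bias=False)
tokens = torch.randn(8, HIDDEN)  # a batch of 8 token hidden states

logits = router(tokens)                           # (8, 256) routing scores
topk_vals, topk_idx = logits.topk(TOP_K, dim=-1)  # pick 4 of the 256 experts
weights = F.softmax(topk_vals, dim=-1)            # renormalize over the 4 chosen

# Each token activates only 4/256 = 1.5625% of the expert pool, which
# (together with the shared dense layers) is how ~398B total parameters
# translate to only ~13B active per token.
print(topk_idx[0], weights[0])
```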
Benchmark Results
| Benchmark | N-shot | Metric | Score | Stderr |
|---|---|---|---|---|
| mbpp_plus | 3 | pass_at_1,none | 0.8862 | ±0.0164 |
| minerva_math500 | 4 | math_verify,none | 0.6520 | ±0.0213 |
| hellaswag_5shot | 5 | acc_norm,none | 0.9011 | ±0.0030 |
| winogrande_5shot | 5 | acc,none | 0.8082 | ±0.0111 |
| mmlu_5shot | 5 | acc,none | 0.8258 | ±0.0031 |
| mmlu_generative_5shot | 5 | exact_match,get_response | 0.8260 | ±0.0031 |
| mmlu_pro | 5 | exact_match,custom-extract | 0.6602 | ±0.0042 |
| triviaqa_5shot | 5 | exact_match,remove_whitespace | 0.8330 | ±0.0028 |
| arc_challenge_0shot | 0 | acc_norm,none | 0.6544 | ±0.0139 |
| bbh_fewshot | 3 | exact_match,remove_whitespace | 0.6570 | ±0.0051 |
| gpqa_diamond_5shot | 5 | acc_norm,none | 0.4394 | ±0.0354 |
| gsm8k_cot | 8 | exact_match,flexible-extract | 0.9136 | ±0.0077 |
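The metric names above (e.g. `acc_norm,none`, `exact_match,flexible-extract`) match the output format of EleutherAI's lm-evaluation-harness. A hedged reproduction sketch for one row follows; the exact task configurations and harness version used for the table are assumptions.

```python
# Hedged sketch of reproducing one benchmark row with the EleutherAI
# lm-evaluation-harness (pip install lm-eval). The task name and few-shot
# setting mirror the winogrande_5shot row; exact configs are assumptions.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=arcee-ai/Trinity-Large-Base,dtype=bfloat16",
    tasks=["winogrande"],
    num_fewshot=5,
)
print(results["results"])
```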
Training Configuration
Pretraining
- Training tokens: 17 trillion
- Checkpoint type: post-anneal (foundation)
- Instruction data: none
- RLHF or post-training: none
This checkpoint is the final pretrained state, including mid-training learning-rate anneals, captured before any instruction tuning or reinforcement learning.
Optimizers
Optimizer learning rates during the stable phase of the WSD (warmup-stable-decay) schedule:
- Adam learning rate: 2e-4
- Muon learning rate: 8e-4
Muon was used to support larger critical batch sizes in a highly sparse MoE regime.
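As a sketch of the schedule shape, the snippet below implements a generic WSD curve. The stable-phase peaks come from the values above; the warmup and decay fractions, linear shapes, and step count are illustrative assumptions.

```python
# Sketch of a WSD (warmup-stable-decay) learning-rate schedule. Peak values
# come from the card; warmup/decay fractions and shapes are assumptions.
def wsd_lr(step, total_steps, peak_lr, warmup_frac=0.01, decay_frac=0.1):
    warmup_steps = int(total_steps * warmup_frac)
    decay_start = int(total_steps * (1 - decay_frac))
    if step < warmup_steps:        # linear warmup
        return peak_lr * step / max(warmup_steps, 1)
    if step < decay_start:         # long stable plateau
        return peak_lr
    # linear anneal to zero over the final decay_frac of training
    return peak_lr * (total_steps - step) / max(total_steps - decay_start, 1)

total = 100_000
for s in (0, 500, 50_000, 95_000, 99_999):
    print(s, f"adam={wsd_lr(s, total, 2e-4):.2e}", f"muon={wsd_lr(s, total, 8e-4):.2e}")
```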
Intended Use Cases
- Studying emergent behavior from large-scale pretraining
- Sparse MoE routing and load-balancing research
- Interpretability, probing, and ablation studies
- Domain-specific fine-tuning from a pretrained foundation
- Academic and industrial foundation model research
Comparison with TrueBase
Trinity-Large-Base includes an additional 7 trillion training tokens compared to Trinity-Large-TrueBase, along with mid-training learning-rate anneals. These anneals stabilize training dynamics and typically improve downstream fine-tuning performance compared to the pre-anneal checkpoint. Researchers studying raw pretraining dynamics may prefer TrueBase, while those seeking a foundation for fine-tuning may prefer this checkpoint.
Known Limitations
- Not aligned for safety, helpfulness, or conversational tone
- Requires substantial compute and expertise to fine-tune
- May exhibit raw or unstable behaviors typical of unaligned models
- No long-context tuning beyond the pretraining-phase extension from the 8K window to 512k
License
Trinity-Large-Base is released under the Apache License, Version 2.0.