# Hy3 preview

Hy3 preview is a 295B-parameter Mixture-of-Experts (MoE) model with 21B activated parameters and 3.8B MTP-layer parameters, developed by the Tencent Hy Team. It is the first model trained on our rebuilt infrastructure and the strongest we have shipped so far, with significant improvements in complex reasoning, instruction following, context learning, coding, and agent tasks.
| Property | Value |
|---|---|
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 295B |
| Activated Parameters | 21B |
| MTP Layer Parameters | 3.8B |
| Number of Layers (excluding MTP layer) | 80 |
| Number of MTP Layers | 1 |
| Attention Heads | 64 (GQA, 8 KV heads, head dim 128) |
| Hidden Size | 4096 |
| Intermediate Size | 13312 |
| Context Length | 256K |
| Vocabulary Size | 120832 |
| Number of Experts | 192 experts, top-8 activated |
| Supported Precisions | BF16 |
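The attention configuration above (80 layers, GQA with 8 KV heads of dimension 128, BF16) is enough for a back-of-envelope estimate of the KV-cache footprint when serving the full 256K context. This is a rough sketch using only the numbers in the table; it ignores the MTP layer and any cache-compression tricks an inference engine may apply:

```python
# Back-of-envelope KV-cache size from the spec table above.
# GQA caches only the 8 KV heads, not all 64 query heads; the single
# MTP layer is excluded for simplicity.
layers = 80          # transformer layers (excluding the MTP layer)
kv_heads = 8         # GQA key/value heads
head_dim = 128
bytes_per_elem = 2   # BF16

# Keys and values each store kv_heads * head_dim elements per layer.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(kv_bytes_per_token)       # 327680 bytes = 320 KiB per token

context = 256 * 1024            # 256K-token context window
total_gib = kv_bytes_per_token * context / 2**30
print(total_gib)                # 80.0 GiB at full context
```

So even with GQA's 8x reduction over caching all 64 heads, a single full-length sequence occupies on the order of 80 GiB of cache, which is why long-context serving is typically sharded across devices.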
## Highlights

### STEM & Reasoning

Complex reasoning underpins everything else. Hy3 preview performs well on challenging STEM benchmarks such as FrontierScience-Olympiad and IMOAnswerBench, and achieved excellent results in the Tsinghua Qiuzhen College Math PhD qualifying exam (Spring '26) and the China High School Biology Olympiad (CHSBO 2025), demonstrating generalizable reasoning capacity.

### Context Learning & Instruction Following

Real-world tasks require parsing messy, lengthy contexts and following complex rules. We built CL-bench and CL-bench-Life from our own business scenarios to measure context learning ability directly. Hy3 preview shows solid gains in both context learning and instruction following.

### Code & Agent

Coding and agents saw the biggest gains. With a rebuilt RL infrastructure and larger-scale training tasks, we posted competitive scores across mainstream coding-agent benchmarks (SWE-bench Verified, Terminal-Bench 2.0) and search-agent benchmarks (BrowseComp, WideSearch).
## Benchmark Results

### Pre-trained Model Performance

| Category | Benchmark (Metric) | # Shots | Kimi-K2 BASE | DeepSeek-V3 BASE | GLM-4.5 BASE | Hy3 preview-Base |
|---|---|---|---|---|---|---|
| | #ActivatedParams | - | 32B | 37B | 32B | 21B |
| | #TotalParams | - | 1043B | 671B | 355B | 295B |
| English | MMLU | 5-shot | 88.24 | 87.68 | 87.73 | 87.42 |
| | MMLU-Pro | 5-shot | 65.98 | 63.98 | 63.67 | 65.76 |
| | MMLU-Redux | 5-shot | 87.18 | 86.81 | 86.56 | 86.86 |
| | ARC-Challenge | 0-shot | 96.66 | 94.65 | 96.32 | 95.99 |
| | DROP | 5-shot | 86.40 | 86.50 | 82.90 | 85.50 |
| | PIQA | 4-shot | 84.93 | 84.22 | 84.71 | 84.39 |
| | SuperGPQA | 5-shot | 51.10 | 46.17 | 49.64 | 51.60 |
| | SimpleQA | 5-shot | 34.37 | 26.15 | 29.26 | 26.47 |
| Code | MBPP-plus | 3-shot | 81.35 | 75.47 | 78.05 | 78.71 |
| | CRUXEval-I | 3-shot | 68.01 | 67.79 | 68.51 | 71.19 |
| | CRUXEval-O | 3-shot | 69.62 | 71.00 | 67.75 | 68.38 |
| | LiveCodeBench-v6 | 1-shot | 30.86 | 29.31 | 27.43 | 34.86 |
| Math | GSM8K | 4-shot | 93.46 | 88.15 | 90.06 | 95.37 |
| | MATH | 4-shot | 71.20 | 59.37 | 61.00 | 76.28 |
| | CMath | 4-shot | 90.83 | 85.50 | 89.33 | 91.17 |
| Chinese | C-Eval | 5-shot | 91.51 | 90.35 | 85.84 | 89.80 |
| | CMMLU | 5-shot | 90.72 | 87.90 | 86.46 | 89.61 |
| | Chinese-simpleQA | 5-shot | 74.58 | 68.72 | 68.49 | 69.73 |
| Multilingual | MMMLU | 5-shot | 77.63 | 79.54 | 79.26 | 80.15 |
| | INCLUDE | 5-shot | 75.66 | 77.86 | 76.27 | 78.64 |
### Instruct Model Performance
Coding tests whether a model can execute in a development environment; search tests whether it can find and combine information from the open web. Both matter for complex agent scenarios like OpenClaw. Hy3 preview scores well on ClawEval and WildClawBench, a sign that its agent capabilities are becoming practical.
Beyond public benchmarks, we built internal evaluation sets to test the model in real development scenarios. On Hy-Backend (backend-focused tasks), Hy-Vibe Bench (real-user dev workflows), and Hy-SWE Max, Hy3 preview scores competitively against other open-source models.
## Training

Hy3 preview ships with a complete model training pipeline that supports both full fine-tuning and LoRA fine-tuning, with DeepSpeed ZeRO configurations and LLaMA-Factory integration.

For detailed training documentation, please refer to the Training Guide.
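To make the DeepSpeed side concrete, here is an illustrative ZeRO-3 + BF16 configuration of the general kind such a pipeline consumes. The specific values (batch sizes, clipping) are placeholders, not the defaults shipped with the training guide:

```python
import json

# Illustrative DeepSpeed ZeRO-3 configuration for BF16 training.
# All values are placeholders; consult the Training Guide for the
# configurations actually shipped with the pipeline.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},  # matches the model's supported precision
    "zero_optimization": {
        "stage": 3,             # shard parameters, gradients, and optimizer state
        "overlap_comm": True,   # overlap communication with computation
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "gradient_clipping": 1.0,
}

with open("ds_zero3.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

ZeRO-3 is the natural choice at this scale: with 295B total parameters, even the frozen expert weights must be sharded across data-parallel ranks rather than replicated.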
## Quantization

We provide AngelSlim, an accessible, comprehensive, and efficient toolkit for large-model compression. AngelSlim offers a full suite of compression tools for large-scale multimodal models, including common quantization algorithms, low-bit quantization, and speculative sampling.
## License

Hy3 preview is released under the Tencent Hy Community License Agreement. See LICENSE for details.
## Contact Us

If you would like to leave a message for our R&D and product teams, please feel free to contact us. You can also reach us via email: