Zigeng / R1-VeriThinker-7B

huggingface.co
Total runs: 827
Last updated: May 27, 2025
text-generation

🔍 VeriThinker: Learning to Verify Makes Reasoning Model Efficient
Zigeng Chen, Xinyin Ma, Gongfan Fang, Ruonan Yu, Xinchao Wang
xML Lab, National University of Singapore


The key distinction between VeriThinker and traditional SFT- or RL-based long-to-short methods: we uniquely train LRMs on an auxiliary CoT verification task, achieving effective CoT compression without relying on synthetic target reasoning chains.

💻 GitHub Code Repository
📄 Paper ArXiv-Link
🤖 Model R1-VeriThinker-7B
📊 Data CoT-Veirification-340k
📄 Paper (🤗) Hugging Face Paper
💡 Introduction

We introduce VeriThinker, a novel approach for CoT compression. Unlike conventional methods that fine-tune LRMs directly on the original reasoning task using synthetic concise CoT data, we fine-tune the model solely through an auxiliary verification task. By training LRMs to accurately verify the correctness of CoT solutions, the LRMs inherently become more discerning about the necessity of subsequent self-reflection steps, thereby effectively suppressing overthinking. Extensive experiments validate that VeriThinker substantially reduces reasoning chain lengths while maintaining or even slightly improving accuracy. Applied to DeepSeek-R1-Distill-Qwen-7B, our approach reduces reasoning tokens on MATH500 from 3790 to 2125 while improving accuracy by 0.8% (94.0% to 94.8%); on AIME25, tokens decrease from 14321 to 10287 with a 2.1% accuracy gain (38.7% to 40.8%). Our experiments also demonstrate that VeriThinker generalizes zero-shot to speculative reasoning, boosting throughput.

🚀 Quick Start:
Reasoning Task:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Zigeng/R1-VeriThinker-7B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "Given a=4, b=7, please help me calculate a*2+b*3+(a+b)^2."
tail = r" Please reason step by step, and put your final answer within \boxed{}."
messages = [
    {"role": "user", "content": prompt + tail}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the reasoning chain and answer. do_sample=True is required
# for temperature/top_p to take effect.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)

# Strip the prompt tokens from the output before decoding
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
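To grade the output programmatically, the final answer can be pulled out of the `\boxed{}` wrapper. A minimal sketch; the `extract_boxed_answer` helper and its regex are our own illustration, not part of the released code:

```python
import re
from typing import Optional

def extract_boxed_answer(response: str) -> Optional[str]:
    """Return the contents of the last \\boxed{...} in a model response,
    or None if no boxed answer is present."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1] if matches else None

# Example on a response fragment shaped like the model's output:
sample = r"Adding the results: 8 + 21 + 121 = 150. The answer is \boxed{150}."
print(extract_boxed_answer(sample))  # -> 150
```

The regex above assumes no nested braces inside `\boxed{}`; extend it if your answers contain fractions or other brace-bearing LaTeX.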
Correctness Verification Task:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Zigeng/R1-VeriThinker-7B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input. Triple quotes are needed: the instruction
# spans multiple lines.
prompt_part_1 = """## Instruction:
You will be provided with a question along with a proposed solution. Please carefully verify each step of the solution, and tell me if every step is absolutely correct.

"""

prompt_part_2 = """## Question:
{} Please reason step by step, and put your final answer within \\boxed{}.

## Proposed Solution:
{}
"""

question = "Given a=4, b=7, please help me calculate a*2+b*3+(a+b)^2."
cot_solution = """First, let's parse and calculate the given expression a*2+b*3+(a+b)^2 step by step, where a=4 and b=7.

Calculate a * 2: 4 * 2 = 8

Calculate b * 3: 7 * 3 = 21

Calculate (a + b)^2: (4 + 7)^2 = 11^2 = 121

Add the above results: 8 + 21 + 121 = 150

The final result is \\boxed{150}.
"""
# prompt_part_2 has three "{}" slots; the middle argument re-inserts a
# literal "{}" so that "\boxed{}" survives the format call.
prompt = prompt_part_1 + prompt_part_2.format(question, "{}", cot_solution)

messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the verification verdict (greedy decoding)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=64,
    do_sample=False
)

# Strip the prompt tokens from the output before decoding
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
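The verifier's verdict is free-form text, so downstream code needs to map it to a boolean. A tolerant sketch, assuming the verdict mentions "correct" or "incorrect" (this keyword heuristic is our own assumption; adjust it to the model's actual output format):

```python
def parse_verdict(response: str) -> bool:
    """Heuristically map a verification response to True (solution judged
    correct) or False. Assumes the verdict text mentions "correct" or
    "incorrect" -- adapt to the model's actual phrasing."""
    text = response.lower()
    if "incorrect" in text or "not correct" in text:
        return False
    return "correct" in text

print(parse_verdict("Yes, every step is absolutely correct."))  # -> True
print(parse_verdict("No, step 3 is incorrect."))                # -> False
```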
📖 Experimental Results
CoT Compression Results:


CoT Correctness Verification Results:


Speculative Reasoning Results:

Speculative reasoning results on three reasoning models. When using Qwen-2.5-Math-Instruct-7B as the draft model, most problems in MATH500 and GSM8K can be solved with the short-CoT model, while only a few (around 10%) require activation of the long-CoT model for more complex solutions.
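The speculative pipeline above can be sketched as a simple control loop: the short-CoT draft model answers first, the verifier judges the draft, and only rejected drafts escalate to the long-CoT model. The callables below are illustrative stand-ins for the three models, not the released API:

```python
from typing import Callable

def speculative_reason(
    question: str,
    draft_solve: Callable[[str], str],    # short-CoT draft model (stand-in)
    verify: Callable[[str, str], bool],   # VeriThinker-style verifier (stand-in)
    long_solve: Callable[[str], str],     # long-CoT fallback model (stand-in)
) -> str:
    """Accept the cheap draft solution when the verifier approves it;
    otherwise fall back to the expensive long-CoT model."""
    draft = draft_solve(question)
    if verify(question, draft):
        return draft  # most MATH500/GSM8K cases stop here per the results above
    return long_solve(question)

# Toy demonstration with stand-in callables:
accepted = speculative_reason("2+2?", lambda q: "4", lambda q, s: True, lambda q: "long")
rejected = speculative_reason("hard", lambda q: "?", lambda q, s: False, lambda q: "long")
print(accepted, rejected)  # -> 4 long
```

In practice `verify` would wrap the correctness-verification call shown in the Quick Start, and the two solvers would be separate generation pipelines.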

Citation

If our research assists your work, please give us a star ⭐ or cite us using:


License: MIT (https://choosealicense.com/licenses/mit)

Model URL: https://huggingface.co/Zigeng/R1-VeriThinker-7B