zeliang0426 / token

huggingface.co
Total runs: 40
24-hour runs: 0
7-day runs: 0
30-day runs: 0
Model last updated: July 13, 2025
text-generation

Model Card for token

This model is a fine-tuned version of an unspecified base model. It has been trained using TRL.

Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zeliang0426/token", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
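The pipeline above takes a chat-style list of messages rather than a raw string; internally, the tokenizer's chat template flattens that list into a single prompt before generation. A minimal sketch of the idea, using a made-up tag format rather than this model's actual template (real models each define their own via `tokenizer.apply_chat_template`):

```python
def apply_simple_chat_template(messages, add_generation_prompt=True):
    """Flatten chat messages into one prompt string.

    The <|role|> tag format here is hypothetical, chosen only to
    illustrate how role/content pairs become plain text.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    if add_generation_prompt:
        # Leave an open assistant turn for the model to complete.
        parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = apply_simple_chat_template(
    [{"role": "user", "content": "Which would you choose and why?"}]
)
```

With `return_full_text=False`, as in the quick-start snippet, only the newly generated assistant turn is returned, not this prompt prefix.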
Training procedure

This model was trained with GRPO, a method introduced in DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models.
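As a rough illustration of the core idea (not the exact TRL implementation), GRPO samples a group of completions for the same prompt, scores each with a reward function, and normalizes each reward against the group's mean and standard deviation to obtain a per-completion advantage:

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize rewards within one sampled group.

    Completions scoring above the group mean get positive advantages,
    those below get negative ones; eps guards against a zero std.
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical rewards for 4 completions sampled from one prompt.
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the baseline is the group mean rather than a learned value function, GRPO needs no separate critic model, which is its main practical appeal over PPO.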

Framework versions
  • TRL: 0.20.0.dev0
  • Transformers: 4.53.1
  • PyTorch: 2.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1
Citations

Cite GRPO as:

@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}

More Information About token on huggingface.co

token on huggingface.co

token is an AI model hosted on huggingface.co by zeliang0426, where it can be used instantly. huggingface.co supports a free trial of the token model as well as paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.

zeliang0426 token free online trial

You can try token online for free at:

https://huggingface.co/zeliang0426/token

token install

token's weights can be downloaded from the same huggingface.co repository and installed locally; huggingface.co also hosts the model so users can debug and trial it directly on the platform or through the API.

Provider of token on huggingface.co

zeliang0426
