Mxode / NanoTranslator-M2

huggingface.co
Total runs: 6
24-hour runs: 0
7-day runs: 0
30-day runs: 0
Model last updated: September 14, 2024
translation

NanoTranslator-M2

Introduction

This is the medium-2 model of the NanoTranslator series; it currently supports English-to-Chinese translation only.

The ONNX version of the model is also available in the repository.

All models are collected in the NanoTranslator Collection.

|     | P.  | Arch. | Act.   | V.    | H.  | I.   | L. | A.H. | K.H. | Tie  |
| --- | --- | ----- | ------ | ----- | --- | ---- | -- | ---- | ---- | ---- |
| XXL | 100 | LLaMA | SwiGLU | 16000 | 768 | 4096 | 8  | 24   | 8    | True |
| XL  | 78  | LLaMA | GeGLU  | 16000 | 768 | 4096 | 6  | 24   | 8    | True |
| L   | 49  | LLaMA | GeGLU  | 16000 | 512 | 2816 | 8  | 16   | 8    | True |
| M2  | 22  | Qwen2 | GeGLU  | 4000  | 432 | 2304 | 6  | 24   | 8    | True |
| M   | 22  | LLaMA | SwiGLU | 8000  | 256 | 1408 | 16 | 16   | 4    | True |
| S   | 9   | LLaMA | SwiGLU | 4000  | 168 | 896  | 16 | 12   | 4    | True |
| XS  | 2   | LLaMA | SwiGLU | 2000  | 96  | 512  | 12 | 12   | 4    | True |
  • P. - Parameters (in millions)
  • V. - vocab size
  • H. - hidden size
  • I. - intermediate size
  • L. - num layers
  • A.H. - num attention heads
  • K.H. - num kv heads
  • Tie - tie word embeddings
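As a sanity check on the table above, the M2 parameter count can be roughly reproduced from its hyperparameters. The sketch below is illustrative, not part of the model card: it assumes grouped-query attention and a GeGLU MLP with gate/up/down projections, and ignores bias and normalization parameters.

```python
# M2 hyperparameters from the table: vocab, hidden, intermediate,
# layers, attention heads, kv heads.
V, H, I, L, AH, KH = 4000, 432, 2304, 6, 24, 8

head_dim = H // AH        # 18
kv_dim = head_dim * KH    # 144 (grouped-query attention)

embeddings = V * H                         # tied in/out embeddings counted once
attn = H * H + 2 * H * kv_dim + H * H      # q, k, v, o projections per layer
mlp = 2 * H * I + I * H                    # gate + up + down (GeGLU) per layer
total = embeddings + L * (attn + mlp)

print(f"{total / 1e6:.1f}M")  # → 22.6M, consistent with the 22M in the table
```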
How to use

The prompt format is as follows:

<|im_start|>{English Text}<|endoftext|>
Using transformers directly
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = 'Mxode/NanoTranslator-M2'

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

def translate(text: str, model, **kwargs):
    generation_args = dict(
        max_new_tokens = kwargs.pop("max_new_tokens", 512),
        do_sample = kwargs.pop("do_sample", True),
        temperature = kwargs.pop("temperature", 0.55),
        top_p = kwargs.pop("top_p", 0.8),
        top_k = kwargs.pop("top_k", 40),
        eos_token_id = kwargs.pop("eos_token_id", tokenizer.eos_token_id),
        **kwargs
    )

    prompt = "<|im_start|>" + text + "<|endoftext|>"
    model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

    generated_ids = model.generate(model_inputs.input_ids, **generation_args)
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return response

text = "I love to watch my favorite TV series."

response = translate(text, model, max_new_tokens=64, do_sample=False)
print(response)
ONNX

Measurements show that inference with the ONNX model is 2-10 times faster than inference directly with the transformers model.
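The speedup can be checked with a simple timing harness. This is a generic sketch, not a benchmark shipped with the model; `generate_fn` stands for any no-argument callable wrapping a generation call.

```python
import time

def best_time(generate_fn, n_runs: int = 3) -> float:
    """Return the fastest wall-clock time over n_runs calls."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        generate_fn()
        times.append(time.perf_counter() - start)
    return min(times)

# Hypothetical usage, with `model` and `ort_model` loaded as shown in this card:
# speedup = best_time(lambda: translate(text, model)) / best_time(lambda: translate(text, ort_model))
```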

You need to switch to the onnx branch of the repository manually and download the files locally.
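One way to fetch that branch (an assumption on my part; any method of downloading the onnx revision works) is the huggingface_hub CLI, or plain git:

```shell
# Download the onnx branch of the repo into a local folder.
huggingface-cli download Mxode/NanoTranslator-M2 --revision onnx --local-dir ./onnx_model

# Alternative: clone the onnx branch directly with git.
# git clone -b onnx https://huggingface.co/Mxode/NanoTranslator-M2 onnx_model
```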

Reference docs:

Using ORTModelForCausalLM

from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_path = "your/folder/to/onnx_model"

ort_model = ORTModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

text = "I love to watch my favorite TV series."

response = translate(text, ort_model, max_new_tokens=64, do_sample=False)
print(response)

Using pipeline

from optimum.pipelines import pipeline

model_path = "your/folder/to/onnx_model"
pipe = pipeline("text-generation", model=model_path, accelerator="ort")

text = "I love to watch my favorite TV series."

response = pipe(text, max_new_tokens=64, do_sample=False, eos_token_id=2)
print(response)
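A text-generation pipeline returns a list of dicts whose `generated_text` field contains the prompt followed by the completion, so the translation can be recovered by stripping the prompt prefix. A minimal sketch (the sample output below is fabricated for illustration, not a real model run):

```python
def extract_translation(pipe_output, prompt: str) -> str:
    # pipeline("text-generation") returns [{"generated_text": prompt + completion}]
    full = pipe_output[0]["generated_text"]
    return full[len(prompt):] if full.startswith(prompt) else full

# Illustrative fake pipeline output:
sample = [{"generated_text": "I love to watch my favorite TV series.我喜欢看我最喜欢的电视剧。"}]
print(extract_translation(sample, "I love to watch my favorite TV series."))
# → 我喜欢看我最喜欢的电视剧。
```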


More Information

License: GPL-3.0 (https://choosealicense.com/licenses/gpl-3.0)

Model URL: https://huggingface.co/Mxode/NanoTranslator-M2

Provider: Mxode