from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_path = "your/folder/to/onnx_model"
ort_model = ORTModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# The model card's `translate` helper is not shown here; a minimal
# version (assuming the source text is passed directly as the prompt)
# might look like this:
def translate(text, model, **kwargs):
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, **kwargs)
    # generate() echoes the prompt, so decode only the new tokens
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

text = "I love to watch my favorite TV series."
response = translate(text, ort_model, max_new_tokens=64, do_sample=False)
print(response)
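A causal LM's `generate` call returns the prompt token IDs followed by the newly generated ones, which is why a translate-style helper has to slice off the prompt before decoding. The token IDs below are made-up placeholders, not NanoTranslator-M2's real vocabulary; the sketch only illustrates the slicing step:

```python
# generate() output = prompt IDs + newly generated IDs (placeholder values)
prompt_ids = [12, 840, 3]                 # hypothetical tokenized source text
generated = prompt_ids + [77, 205, 2]     # hypothetical model output (2 = EOS)
new_tokens = generated[len(prompt_ids):]  # keep only the continuation
print(new_tokens)  # → [77, 205, 2]
```

Decoding `new_tokens` with the tokenizer (instead of the full `generated` sequence) is what keeps the input sentence out of the returned translation.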
Using pipeline
from optimum.pipelines import pipeline

model_path = "your/folder/to/onnx_model"
pipe = pipeline("text-generation", model=model_path, accelerator="ort")

text = "I love to watch my favorite TV series."
response = pipe(text, max_new_tokens=64, do_sample=False, eos_token_id=2)
print(response)
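A `text-generation` pipeline returns a list with one dict per input, and the output text sits under the `"generated_text"` key. The dict below is a stand-in for a real pipeline result (the translated sentence is invented for illustration, not actual model output); it shows how to pull the text out of the response:

```python
# Stand-in for a real pipeline result; the translation string is invented.
response = [{"generated_text": "我喜欢看我最喜欢的电视剧。"}]
translation = response[0]["generated_text"]
print(translation)
```

With several inputs, `pipe([text1, text2, ...])` returns one such dict list per input, so the same indexing applies element by element.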
Runs of Mxode NanoTranslator-M2 on huggingface.co
Total runs: 6
24-hour runs: 0
3-day runs: 0
7-day runs: 0
30-day runs: 0
More Information About NanoTranslator-M2 on huggingface.co
NanoTranslator-M2 is an AI model from Mxode hosted on huggingface.co. huggingface.co offers a free online trial of NanoTranslator-M2 as well as paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.
Mxode NanoTranslator-M2 free online URL on huggingface.co:
NanoTranslator-M2 is also open source: any user can find and install it from GitHub for free. At the same time, huggingface.co hosts the model directly, so users can debug and try NanoTranslator-M2 on huggingface.co without installing anything, with free API access likewise supported.