bigscience/mt0-large

huggingface.co: https://huggingface.co/bigscience/mt0-large
Pipeline: text-generation
Last updated: September 26, 2023


Repository: xmtf (https://github.com/bigscience-workshop/xmtf)

Table of Contents

  1. Model Summary
  2. Use
  3. Limitations
  4. Training
  5. Evaluation
  6. Citation

Model Summary

We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.

Multitask finetuned on xP3. Recommended for prompting in English.

  Parameters   Finetuned Model
  300M         mt0-small
  580M         mt0-base
  1.2B         mt0-large
  3.7B         mt0-xl
  13B          mt0-xxl
  560M         bloomz-560m
  1.1B         bloomz-1b1
  1.7B         bloomz-1b7
  3B           bloomz-3b
  7.1B         bloomz-7b1
  176B         bloomz

Multitask finetuned on xP3mt. Recommended for prompting in non-English.

  Finetuned Model: mt0-xxl-mt, bloomz-7b1-mt, bloomz-mt

Multitask finetuned on P3. Released for research purposes only; strictly inferior to the models above!

  Finetuned Model: mt0-xxl-p3, bloomz-7b1-p3, bloomz-p3

Original pretrained checkpoints. Not recommended.

  Pretrained Model: mt5-small, mt5-base, mt5-large, mt5-xl, mt5-xxl, bloom-560m, bloom-1b1, bloom-1b7, bloom-3b, bloom-7b1, bloom
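
Note that the two families load through different transformers classes: mT0 checkpoints are encoder-decoder (mT5-based) models, while BLOOMZ checkpoints are decoder-only (BLOOM-based) models. A minimal sketch of the distinction (the bloomz-560m checkpoint is chosen here purely for illustration):

from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

# mT0 checkpoints are encoder-decoder models and load as seq2seq LMs
mt0 = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")

# BLOOMZ checkpoints are decoder-only models and load as causal LMs
bloomz = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")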

Use

Intended use

We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "Translate to English: Je t’aime.", the model will most likely answer "I love you.". Some prompt ideas from our paper:

  • 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? (Chinese: "A legendary beginning, an undying myth; this is not merely a film, but a marker of a new era that will forever go down in history. Do you think the stance of this sentence is praise, neutral, or criticism?")
  • Suggest at least five related search terms to "Mạng neural nhân tạo". (Vietnamese for "artificial neural network")
  • Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
  • Explain in a sentence in Telugu what is backpropagation in neural networks.

Feel free to share your generations in the Community tab!

How to use
CPU

# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-large"

# Load the tokenizer and the encoder-decoder model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Encode the prompt, generate with default settings, and decode the output ids
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
GPU

# pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-large"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# device_map="auto" places the weights on the available GPU(s);
# torch_dtype="auto" uses the dtype stored in the checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

# Move the input ids to the GPU before generating
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
GPU in 8bit

# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-large"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# load_in_8bit=True quantizes the weights with bitsandbytes to reduce memory use
# (newer transformers versions prefer quantization_config=BitsAndBytesConfig(load_in_8bit=True))
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))

Limitations

Prompt Engineering: Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "Translate to English: Je t'aime" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "Translate to English: Je t'aime.", "Translate to English: Je t'aime. Translation:" or "What is "Je t'aime." in English?", where it is clear to the model when it should answer. Furthermore, we recommend giving the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model so, e.g. "Explain in a sentence in Telugu what is backpropagation in neural networks.".
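
To see the effect concretely, here is a minimal sketch that reuses the CPU setup from above and runs the same request under three formulations; the exact generations depend on the model version and the default generation settings:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

prompts = [
    "Translate to English: Je t'aime",                # no full stop: the model may continue the French
    "Translate to English: Je t'aime.",               # the input clearly ends here
    "Translate to English: Je t'aime. Translation:",  # explicit cue for where the answer starts
]
for prompt in prompts:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs)
    print(repr(prompt), "->", tokenizer.decode(outputs[0], skip_special_tokens=True))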

Training

Model
  • Architecture: Same as mt5-large; also refer to the config.json file (see the sketch after this list)
  • Finetuning steps: 25000
  • Finetuning tokens: 4.62 billion
  • Precision: bfloat16
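
The architecture bullet can be checked without downloading the weights; a minimal sketch using transformers' AutoConfig (the attributes shown are the standard mT5 config fields):

from transformers import AutoConfig

# Fetch only the configuration file, not the weights
config = AutoConfig.from_pretrained("bigscience/mt0-large")

print(config.model_type)                                    # "mt5", the same family as mt5-large
print(config.d_model, config.num_layers, config.num_heads)  # hidden size, encoder layers, attention heads
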
Hardware
  • TPUs: TPUv4-64
Software
  • Orchestration: T5X
  • Neural networks: Jax

Evaluation

We refer to Table 7 from our paper & bigscience/evaluation-results for zero-shot results on unseen tasks. The sidebar on the huggingface.co model page reports zero-shot performance of the best prompt per dataset config.

Citation

@article{muennighoff2022crosslingual,
  title={Crosslingual generalization through multitask finetuning},
  author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
  journal={arXiv preprint arXiv:2211.01786},
  year={2022}
}


License

apache-2.0 (https://choosealicense.com/licenses/apache-2.0)

Model page: https://huggingface.co/bigscience/mt0-large