TinyLlama / TinyLlama_v1.1

huggingface.co
Last updated: June 7, 2024
text-generation

Model Details

TinyLlama-1.1B-v1.1

We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built on Llama. TinyLlama is also compact, with only 1.1B parameters, which suits the many applications that demand a restricted compute and memory footprint.
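The 1.1B figure follows directly from the Llama-style dimensions. As a sanity check, the sketch below recomputes the parameter count from the model's dimensions (hidden size 2048, 22 layers, 32 query / 4 key-value heads, MLP size 5632, 32k vocabulary, untied embeddings — treat these as assumed config values):

```python
# Approximate parameter count for TinyLlama from its Llama-style dimensions.
# The dimensions below are assumed from the released config.
vocab, hidden, layers = 32000, 2048, 22
heads, kv_heads, head_dim = 32, 4, 64
mlp = 5632

embed = vocab * hidden                          # input embeddings
attn = (hidden * heads * head_dim               # Q projection
        + 2 * hidden * kv_heads * head_dim      # K and V (grouped-query attention)
        + heads * head_dim * hidden)            # output projection
ffn = 3 * hidden * mlp                          # gate, up, and down projections
norms = 2 * hidden                              # two RMSNorm weights per layer
per_layer = attn + ffn + norms

# Final RMSNorm plus an untied LM head.
total = embed + layers * per_layer + hidden + vocab * hidden
print(f"{total / 1e9:.2f}B parameters")
```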

Overview

In this project, rather than training only a single TinyLlama model, we first train TinyLlama on a corpus of 1.5 trillion tokens to obtain foundational language capabilities. We then turn this model into three different models by continual pretraining with three distinct data sampling strategies. For a visual representation of this process, see the figure below.

(Figure: overview of the three-stage training process)

Pretraining

Due to these issues (bug1, bug2), we retrained TinyLlama to provide a better model. We trained on 2T tokens and divided pretraining into three stages: 1) basic pretraining, 2) continual pretraining with specific domains, and 3) cooldown.
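Piecing together the token counts quoted in the stage descriptions (1.5T for basic pretraining, continuing to ~1.85T before cooldown, 2T overall), the per-stage budget works out as follows — a back-of-the-envelope breakdown, not an official schedule:

```python
# Token budget per stage, inferred from the ~1.5T / ~1.85T / 2T checkpoints
# quoted in the text. These splits are approximate.
TOTAL = 2.0e12
stages = {
    "basic_pretraining": 1.5e12,                 # SlimPajama only
    "continual_pretraining": 1.85e12 - 1.5e12,   # domain-specific mixtures
    "cooldown": 2.0e12 - 1.85e12,                # larger batch, same LR schedule
}
for name, tokens in stages.items():
    print(f"{name}: {tokens / 1e12:.2f}T tokens")
```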

Basic pretraining

In this initial phase, we trained the model on SlimPajama alone to develop its commonsense reasoning capabilities. The model saw 1.5T tokens during this basic pretraining period. Since we used a cluster with 4 A100-40G GPUs per node and only shard model weights within a node, we could only set the batch size to approximately 1.8M tokens this time.
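At roughly 1.8M tokens per batch, the 1.5T-token basic stage implies on the order of 800k optimizer steps — a quick sanity check (both inputs are approximate figures from the text):

```python
# Rough step count for the basic pretraining stage.
tokens_stage1 = 1.5e12   # tokens seen in basic pretraining
batch_tokens = 1.8e6     # approximate global batch size, in tokens
steps = tokens_stage1 / batch_tokens
print(f"~{steps:,.0f} optimizer steps")
```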

Continual pretraining with specific domain

We incorporated three kinds of corpora during this pretraining: SlimPajama (the same as in the first phase), Math&Code (StarCoder and Proof Pile), and Chinese (SkyPile). This approach allowed us to develop three variant models with specialized capabilities.

During the first ~6B tokens of this stage, we linearly increased the sampling proportion of the domain-specific corpora (excluding SlimPajama, which remained unchanged from stage 1). This warmup strategy was designed to gradually shift the distribution of the pretraining data, ensuring a more stable training process. After this ramp-up, we continued pretraining with a fixed sampling strategy until reaching ~1.85T tokens.
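A minimal sketch of this warmup schedule, using the Math&Code variant's stage-2 proportions as the targets (the helper function and the fill-with-SlimPajama behavior are assumptions for illustration, not the released training code):

```python
# Linearly ramp domain-specific sampling weights over the first 6B tokens,
# then hold them fixed. Targets are the Math&Code variant's stage-2
# proportions; this is an illustrative sketch, not the released code.
WARMUP_TOKENS = 6e9
TARGETS = {"starcoder": 0.15, "proof_pile": 0.10}

def sampling_weights(tokens_seen: float) -> dict:
    ramp = min(1.0, tokens_seen / WARMUP_TOKENS)
    weights = {name: target * ramp for name, target in TARGETS.items()}
    weights["slimpajama"] = 1.0 - sum(weights.values())  # SlimPajama fills the rest
    return weights

print(sampling_weights(0))      # all SlimPajama at the start
print(sampling_weights(3e9))    # halfway through the ramp
print(sampling_weights(6e9))    # final mixture: 75 / 15 / 10
```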

Cooldown

Implementing a cooldown phase has become a crucial technique for achieving better model convergence at the end of pretraining. However, since we had already used a cosine learning-rate schedule from the beginning, it is challenging to alter the learning rate for cooldown as MiniCPM or DeepSeek do. Therefore, we cool down by adjusting the batch size instead: we increase it from 1.8M to 7.2M tokens while keeping the original cosine learning-rate schedule during the cooldown stage.
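The cooldown can be sketched as a step change in batch size under an unchanged cosine schedule. The peak/minimum learning rates and the exact switch point below are illustrative assumptions, not values from the release:

```python
import math

TOTAL_TOKENS = 2.0e12
COOLDOWN_START = 1.85e12     # assumed switch point, per the stage boundaries
MAX_LR, MIN_LR = 4e-4, 4e-5  # illustrative values, not the released ones

def learning_rate(tokens_seen: float) -> float:
    """One cosine schedule over the whole run -- unchanged during cooldown."""
    progress = min(1.0, tokens_seen / TOTAL_TOKENS)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

def batch_size_tokens(tokens_seen: float) -> float:
    """Cooldown is implemented by quadrupling the batch, not by changing the LR."""
    return 7.2e6 if tokens_seen >= COOLDOWN_START else 1.8e6

for t in (1.0e12, 1.85e12, 2.0e12):
    print(f"{t/1e12:.2f}T tokens: lr={learning_rate(t):.2e}, "
          f"batch={batch_size_tokens(t)/1e6:.1f}M")
```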

TinyLlama model family

Following this extensive pretraining process, we are now releasing three specialized versions of our model:

  1. TinyLlama_v1.1 : The standard version, for general-purpose use.
  2. TinyLlama_v1.1_Math&Code : Equipped with better math and code abilities.
  3. TinyLlama_v1.1_Chinese : Equipped with better understanding of Chinese.

Data

Here we list the data sampling proportions (%) in each stage:

TinyLlama_v1.1

| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| --- | --- | --- | --- |
| Slimpajama | 100.0 | 100.0 | 100.0 |

TinyLlama_v1.1_math_code

| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| --- | --- | --- | --- |
| Slimpajama | 100.0 | 75.0 | 75.0 |
| starcoder | - | 15.0 | 15.0 |
| proof_pile | - | 10.0 | 10.0 |

TinyLlama_v1.1_chinese

| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| --- | --- | --- | --- |
| Slimpajama | 100.0 | 50.0 | 50.0 |
| skypile | - | 50.0 | 50.0 |
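At training time these percentages translate directly into per-example corpus sampling. A toy illustration using the Chinese variant's stage-2 mixture (this sampler is a sketch — the real pipeline samples documents from each corpus, not corpus labels):

```python
import random

# Stage-2 mixture for TinyLlama_v1.1_chinese, from the table above.
weights = {"slimpajama": 50.0, "skypile": 50.0}

random.seed(0)  # deterministic for the demo
corpora, probs = zip(*weights.items())
draws = random.choices(corpora, weights=probs, k=10_000)

for name in corpora:
    share = draws.count(name) / len(draws)
    print(f"{name}: {share:.1%}")   # each close to 50%
```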
How to use

You will need transformers>=4.31. Check the TinyLlama GitHub page for more information.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "TinyLlama/TinyLlama_v1.1"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline in half precision, placing layers automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a continuation of the prompt.
sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
Eval

| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
| TinyLlama-1.1B-v1.1 | 2T | 61.47 | 36.80 | 59.43 | 32.68 | 55.47 | 55.99 | 73.56 | 53.63 |
| TinyLlama-1.1B-v1_math_code | 2T | 60.80 | 36.40 | 60.22 | 33.87 | 55.20 | 57.09 | 72.69 | 53.75 |
| TinyLlama-1.1B-v1.1_chinese | 2T | 58.23 | 35.20 | 59.27 | 31.40 | 55.35 | 61.41 | 73.01 | 53.41 |
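The avg column is the plain mean of the seven task scores. For example, for TinyLlama-1.1B-v1.1:

```python
# Task scores for TinyLlama-1.1B-v1.1, copied from the eval table.
scores = {
    "HellaSwag": 61.47, "Obqa": 36.80, "WinoGrande": 59.43,
    "ARC_c": 32.68, "ARC_e": 55.47, "boolq": 55.99, "piqa": 73.56,
}
avg = sum(scores.values()) / len(scores)
print(f"avg = {avg:.2f}")   # 53.63, matching the table
```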


License

TinyLlama_v1.1 is released under the Apache 2.0 license:

https://choosealicense.com/licenses/apache-2.0

TinyLlama_v1.1 on huggingface.co:

https://huggingface.co/TinyLlama/TinyLlama_v1.1
