Image drawn by GPT-4 DALL·E 3
TL;DR: Perhaps better than all existing models < 70B, in most quantitative evaluations...
CausalLM 14B - Fully Compatible with Meta LLaMA 2
Load the model with the transformers library; no remote/external code is required. Use AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer). Model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
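A minimal loading sketch, assuming the Hugging Face repository id `CausalLM/14B` (adjust for the 7B variant):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; use "CausalLM/7B" for the smaller variant.
model_id = "CausalLM/14B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's stored dtype
    device_map="auto",   # requires the `accelerate` package
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```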
News: DPO version ranks #1 among ~13B models - SOTA model of its size on the 🤗 Open LLM Leaderboard
Recent Updates:
- DPO-α version outperforms Zephyr-β in MT-Bench.
Friendly reminder: if your VRAM is insufficient, use the 7B model rather than a quantized version of the 14B. Compared to the quantized versions, the 7B and 14B versions demonstrate a high level of consistency with each other.
Caution:
- Unofficial GPTQ and AWQ models may have issues, as they use Wikitext for calibration, while this model has undergone considerable training on a synthesized Wikipedia conversation dataset.
- Quantization in any form is not recommended; prefer the smaller-sized models instead, since the 7B and 14B versions are highly consistent. If you do quantize, use GGUF (a sketch follows this list).
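If you do use a GGUF quantization, here is a minimal sketch with llama-cpp-python; the filename is hypothetical and depends on which community quantization you download.

```python
from llama_cpp import Llama

# Hypothetical GGUF filename; point this at whichever quantization you downloaded.
llm = Llama(model_path="causallm-14b.Q5_K_M.gguf", n_ctx=4096)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```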
This model was trained based on the model weights of Qwen (and LLaMA2 was used, yes, for calculating some initial weights); depending on your situation, you may also need to comply with the commercial-use restrictions of both models. The training process used a model architecture identical to LLaMA2: the same multi-head attention (MHA) calculation as the original LLaMA2 models, with no additional scaling applied to the Rotary Positional Encoding (RoPE).
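Since the architecture is declared identical to LLaMA2, this can be checked directly from the checkpoint config; a small sketch, with the expected values inferred from the description above rather than verified:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("CausalLM/14B")  # assumed repository id

print(config.model_type)    # expected: "llama"
print(config.rope_scaling)  # expected: None -- no additional RoPE scaling
```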
We manually curated an SFT dataset of 1.3B tokens for training, utilizing open-source datasets from Hugging Face. For most of these sentences, we performed manual or synthetic rewrites and generated alternate-language versions using larger language models. Additionally, we conducted augmented text training using carefully selected entries from Wikipedia, as well as featured entries from Fandom and filtered entries from Moegirlpedia. To strike a balance between efficiency and quality, 100% of the data used for training was synthetic; no text from the internet or original text from publicly available datasets was used directly for fine-tuning.
The 7B version of the model is a distilled version of the 14B model, designed specifically for speculative sampling. Exercise caution when using the 7B model directly, as it may produce hallucinations or unreliable outputs.
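As a sketch of that intended use, the 7B model can serve as the draft model in transformers' assisted generation (its built-in form of speculative decoding); the repository ids are assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ids for the target (14B) and draft (7B) models.
tokenizer = AutoTokenizer.from_pretrained("CausalLM/14B")
target = AutoModelForCausalLM.from_pretrained(
    "CausalLM/14B", torch_dtype="auto", device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "CausalLM/7B", torch_dtype="auto", device_map="auto"
)

inputs = tokenizer(
    "Explain speculative sampling in one sentence.", return_tensors="pt"
).to(target.device)

# The draft model proposes tokens; the target model verifies them.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```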
Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. You will therefore still need to complete your own checks on the model's safety and filter keywords in the output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor to fine-tune on SFT samples that refuse to answer certain questions.
Bonus: the model underwent some fine-tuning on the prompt format introduced in LLaVA-1.5 that is unrelated to image attention calculation. Therefore, aligning the ViT projection module with the frozen LM under visual instructions should enable rapid implementation of effective multimodal capabilities.
We are currently unable to produce accurate benchmark templates for non-QA tasks in languages other than English and Chinese. However, we will be working on versions of the QA-task challenge in other languages in the near future.
Japanese Benchmark
| Task | Version | Metric | Value | Stderr |
|------|---------|--------|-------|--------|
| jcommonsenseqa-1.1-0.6 | 1.1 | acc | 0.8213 | ±0.0115 |
Our JCommonsenseQA result (82.13) is very close to that of Japanese Stable LM Gamma 7B (83.47), the current SOTA Japanese LM, even though our model was not trained on a particularly large amount of Japanese text. This seems to reflect the cross-lingual transferability of metalinguistic ability.
🤗 Open LLM Leaderboard
SOTA chat model of its size on 🤗 Open LLM Leaderboard.
Dec 3, 2023: the DPO version ranks #1 among non-base models of its size on the 🤗 Open LLM Leaderboard, outperforming ALL ~13B chat models.