bling-phi-3 is part of the BLING ("Best Little Instruct No-GPU") model series, RAG-instruct trained on top of a Microsoft Phi-3 base model.
Benchmark Tests
Evaluated against the benchmark test:
RAG-Instruct-Benchmark-Tester
1 Test Run (temperature=0.0, sample=False), scored as follows: 1 point for a correct answer, 0.5 points for a partially correct or blank / "Not Found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination. (A minimal sketch of this scoring scheme follows the results below.)
--Accuracy Score: 99.5 correct out of 100
--Not Found Classification: 95.0%
--Boolean: 97.5%
--Math/Logic: 80.0%
--Complex Questions (1-5): 4 (Above Average - multiple-choice, causal)
--Summarization Quality (1-5): 4 (Above Average)
--Hallucinations: No hallucinations observed in test runs.
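As a minimal sketch of the scoring scheme above (the label names and the function are ours, purely illustrative):

# illustrative scoring sketch - label names are not part of the benchmark
SCORES = {
    "correct": 1.0,
    "partial_or_not_found": 0.5,   # partially correct, or blank / "Not Found"
    "incorrect": 0.0,
    "hallucination": -1.0,
}

def benchmark_score(labels):
    # sum per-question points across all graded answers
    return sum(SCORES[label] for label in labels)

# e.g. 99 correct answers and one "Not Found" -> 99.5 out of 100
print(benchmark_score(["correct"] * 99 + ["partial_or_not_found"]))   # 99.5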
For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
Note: see also the quantized gguf version of the model - bling-phi-3-gguf.
Note: the PyTorch version answered one question with "Not Found" while the quantized version answered it correctly, hence the small difference in scores.
Model Description
Developed by: llmware
Model type: bling
Language(s) (NLP): English
License: Apache 2.0
Finetuned from model: Microsoft Phi-3
Uses
The intended use of BLING models is two-fold:
1. Provide high-quality RAG-Instruct models designed for fact-based, no-"hallucination" question-answering in connection with an enterprise RAG workflow.
2. BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ parameter range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.
Direct Use
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services and legal and regulatory industries with complex information sources.
BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage: provide a text passage context, ask questions, and get clear, fact-based responses, as illustrated below.
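To make these instruction types concrete, here is a hypothetical illustration (the context passage and queries are ours, not drawn from the benchmark or training data); each query would be packaged with the context using the prompt wrapper described in the next section:

# hypothetical context passage and queries - illustrative only
context = ("The Services Agreement is entered into as of March 1, 2024 between "
           "Acme Corp and Widget LLC, with a total contract value of $250,000 "
           "and an initial term of 24 months.")

queries = [
    "What is the total contract value?",               # question-answering
    "What is the effective date of the agreement?",    # key-value extraction
    "Summarize the key terms of the agreement.",       # basic summarization
]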
Bias, Risks, and Limitations
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-3", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("llmware/bling-phi-3", trust_remote_code=True)
Please refer to the generation_test.py files in the Files repository, which include 200 samples and a script to test the model. The generation_test_llmware_script.py includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.
The BLING model was fine-tuned with a simple "<human>" and "<bot>" wrapper, so to get the best results, wrap inference entries as:
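full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"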
(As an aside, we had intended to retire "human-bot" and tried several variations of the new Microsoft Phi-3 prompt template, but ultimately had slightly better results with the very simple "human-bot" separators, so we opted to keep them.)
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
Text Passage Context, and
Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
import torch

# assumes `tokenizer` and `model` are loaded as shown above, and `entries`
# is a dict with "context" and "query" keys
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.0 with do_sample=False for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=False,
    temperature=0.0,
    max_new_tokens=100,
)

# decode only the newly generated tokens, skipping the prompt
output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
Model Card Contact
Darren Oberst & llmware team