Instruction-Data-Guard is a deep-learning classification model that helps identify LLM poisoning attacks in datasets.
It is trained on an instruction:response dataset together with LLM poisoning attacks on such data.
Note that Instruction-Data-Guard performs best on instruction:response datasets.
Input Type(s):
Text Embeddings
Input Format(s):
Numerical Vectors
Input Parameters:
1D Vectors
Other Properties Related to Input:
The text embeddings are generated from the Aegis Defensive Model. Each vector has length 4096.
Output:
Output Type(s):
Classification Scores
Output Format:
Array of shape 1
Output Parameters:
1D
Other Properties Related to Output:
Classification scores represent the model's confidence that the input data is poisoned.
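A minimal sketch of how the input and output specs above fit together. The function and scorer names below are placeholders, not the actual Instruction-Data-Guard API: it assumes the model maps a 1D, length-4096 embedding (produced by the Aegis Defensive Model) to a shape-1 array holding a poisoning-confidence score.

```python
import numpy as np

EMBEDDING_DIM = 4096  # length of Aegis Defensive Model embeddings

def classify_poisoned(embedding: np.ndarray, score_fn, threshold: float = 0.5) -> bool:
    """Return True if the record is flagged as poisoned (hypothetical wrapper)."""
    assert embedding.shape == (EMBEDDING_DIM,), "expected a 1D vector of length 4096"
    # The classifier returns an array of shape 1; reduce it to a scalar score.
    score = np.asarray(score_fn(embedding)).item()
    return score >= threshold

# Stand-in scorer for demonstration only; the real model would replace this.
dummy_scorer = lambda emb: np.array([0.9])

embedding = np.zeros(EMBEDDING_DIM)
flagged = classify_poisoned(embedding, dummy_scorer)
```

The threshold of 0.5 is an illustrative default; in practice it would be tuned against a specificity target as described under the evaluation criteria.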
The data used to train this model contained synthetically-generated LLM poisoning attacks.
Evaluation Benchmarks:
Instruction-Data-Guard is evaluated based on two overarching criteria:
Success in identifying LLM poisoning attacks after the model was trained on examples of those attacks.
Success in identifying LLM poisoning attacks without having been trained on examples of those attacks at all.
Success is defined as an acceptable catch rate (recall on each attack) at a high specificity (e.g., 95%). A catch rate is acceptable when it is high enough to identify at least several poisoned records per attack.
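The catch-rate-at-specificity criterion can be sketched as follows. This is an illustrative recipe, not NVIDIA's evaluation code: pick the score threshold that achieves the target specificity on clean records, then report the fraction of poisoned records (recall) caught at that threshold. The score distributions are synthetic.

```python
import numpy as np

def catch_rate_at_specificity(clean_scores, poisoned_scores, specificity=0.95):
    """Threshold at the target specificity on clean data; return (threshold, recall)."""
    clean = np.asarray(clean_scores, dtype=float)
    poisoned = np.asarray(poisoned_scores, dtype=float)
    # The specificity-quantile of clean scores: by construction, ~95% of
    # clean records score below it and are correctly kept.
    threshold = np.quantile(clean, specificity)
    # Catch rate = fraction of poisoned records scoring above the threshold.
    recall = float(np.mean(poisoned > threshold))
    return threshold, recall

# Synthetic score distributions for demonstration.
rng = np.random.default_rng(0)
clean = rng.normal(0.2, 0.1, 1000)     # clean records score low
poisoned = rng.normal(0.8, 0.1, 100)   # poisoned records score high
thr, recall = catch_rate_at_specificity(clean, poisoned)
```

With well-separated score distributions like these, the recall at 95% specificity is near 1.0; a weaker detector would trade catch rate against the specificity target.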
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI concerns here.