tasksource / deberta-base-long-nli

huggingface.co
Total runs: 521
24-hour runs: 9
7-day runs: 64
30-day runs: 103
Model last updated: October 4, 2024
zero-shot-classification

Model Details of deberta-base-long-nli

deberta-v3-base with a context length of 1280, fine-tuned on tasksource for 250k steps, with long NLI tasks (ConTRoL, doc-nli) oversampled. The training data also includes helpsteer v1/v2, logical reasoning tasks (FOLIO, FOL-nli, LogicNLI...), OASST, hh/rlhf, linguistics-oriented NLI tasks, tasksource-dpo, and fact verification tasks.

This model is suitable for long-context NLI, or as a backbone for fine-tuning reward models or classifiers.

This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:

  • Zero-shot entailment-based classification for arbitrary labels [ZS].
  • Natural language inference [NLI].
  • Hundreds of previous tasks with tasksource-adapters [TA].
  • Further fine-tuning on a new task or a tasksource task (classification, token classification, or multiple choice) [FT].
dataset                        accuracy
anli/a1                        63.3
anli/a2                        47.2
anli/a3                        49.4
nli_fever                      79.4
FOLIO                          61.8
ConTRoL-nli                    63.3
cladder                        71.1
zero-shot-label-nli            74.4
chatbot_arena_conversations    72.2
oasst2_pairwise_rlhf_reward    73.9
doc-nli                        90.0

For comparison, zero-shot GPT-4 scores 61% on FOLIO (logical reasoning), 62% on cladder (probabilistic reasoning), and 56.4% on ConTRoL (long-context NLI).

[ZS] Zero-shot classification pipeline

from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="tasksource/deberta-base-long-nli")

text = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(text, candidate_labels)

The NLI training data of this model includes label-nli, an NLI dataset specifically constructed to improve this kind of zero-shot classification.
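Under the hood, the zero-shot pipeline turns each candidate label into an NLI hypothesis (e.g. "This example is about travel."), scores premise/hypothesis entailment once per label, and normalizes the entailment scores across labels. A minimal sketch of that scoring step, using stubbed numbers rather than real model outputs (the `zero_shot_scores` helper and the example scores are illustrative, not part of the transformers API):

```python
import math

def zero_shot_scores(entailment_scores):
    """Softmax over per-label entailment scores -> label probabilities.

    Mirrors the normalization the zero-shot pipeline applies across
    candidate labels; the scores themselves would come from one NLI
    forward pass per (text, hypothesis) pair.
    """
    exps = [math.exp(s) for s in entailment_scores]
    total = sum(exps)
    return [e / total for e in exps]

# Stubbed entailment scores for labels ['travel', 'cooking', 'dancing']
# (illustrative values, not actual model outputs).
probs = zero_shot_scores([2.0, -1.0, -0.5])
best_label_index = max(range(len(probs)), key=lambda i: probs[i])
```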

[NLI] Natural language inference pipeline

from transformers import pipeline
pipe = pipeline("text-classification", model="tasksource/deberta-base-long-nli")
pipe([dict(text='there is a cat',
           text_pair='there is a black cat')])  # list of (premise, hypothesis) pairs
# [{'label': 'neutral', 'score': 0.9952911138534546}]

[TA] Tasksource-adapters: one-line access to hundreds of tasks

# !pip install tasknet
import tasknet as tn
pipe = tn.load_pipeline('tasksource/deberta-base-long-nli', 'glue/sst2')  # works for 500+ tasksource tasks
pipe(['That movie was great !', 'Awful movie.'])
# [{'label': 'positive', 'score': 0.9956}, {'label': 'negative', 'score': 0.9967}]

The list of tasks is available in the model's config.json. This is more efficient than zero-shot classification, since it requires only one forward pass per example instead of one per candidate label, but it is less flexible.
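A back-of-envelope sketch of that cost difference (the helper names are illustrative, not library functions):

```python
def zero_shot_forward_passes(n_examples, n_labels):
    # NLI-based zero-shot scores one (premise, hypothesis) pair per
    # candidate label, so cost scales with the number of labels.
    return n_examples * n_labels

def adapter_forward_passes(n_examples, n_labels):
    # A task-specific classification head scores every label in a
    # single forward pass per example.
    return n_examples

# 100 examples with 3 candidate labels:
zs_cost = zero_shot_forward_passes(100, 3)       # 300 passes
adapter_cost = adapter_forward_passes(100, 3)    # 100 passes
```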

[FT] Tasknet: 3-line fine-tuning

# !pip install tasknet
import tasknet as tn
hparams = dict(model_name='tasksource/deberta-base-long-nli', learning_rate=2e-5)
model, trainer = tn.Model_Trainer([tn.AutoTask("glue/rte")], hparams)
trainer.train()

Citation

More details in this article:

@inproceedings{sileo-2024-tasksource,
    title = "tasksource: A Large Collection of {NLP} tasks with a Structured Dataset Preprocessing Framework",
    author = "Sileo, Damien",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1361",
    pages = "15655--15684",
}


License

https://choosealicense.com/licenses/apache-2.0

Model page

https://huggingface.co/tasksource/deberta-base-long-nli