from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch
model = BertForSequenceClassification.from_pretrained(
"shahrukhx01/bert-multitask-query-classifiers",
task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
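The `task_labels_map` argument tells the model how many labels each task's classification head has, and `task_name` later selects which head to run. A minimal standalone sketch of that task-to-head routing idea (hypothetical; `multitask_model.BertForSequenceClassification` attaches real classification heads to a shared BERT encoder, while here each "head" is just its label count):

```python
# Hypothetical sketch of the per-task routing implied by task_labels_map.
# Not the library's actual implementation - it only illustrates how a
# task_name can select one head out of several.

class MultiTaskHeads:
    def __init__(self, task_labels_map):
        # One entry per task, e.g. {"quora_keyword_pairs": 2, ...}.
        self.heads = {task: {"num_labels": n} for task, n in task_labels_map.items()}

    def forward(self, task_name):
        """Route to the head registered for task_name."""
        if task_name not in self.heads:
            raise KeyError(f"unknown task: {task_name}")
        return self.heads[task_name]["num_labels"]

heads = MultiTaskHeads({"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2})
print(heads.forward("quora_keyword_pairs"))  # → 2
```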
## Run inference on both tasks
## Keyword vs Statement/Question Classifier
input = ["keyword query", "is this a keyword query?"]
task_name = "quora_keyword_pairs"
sequence = tokenizer(input, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for input, prediction in zip(input, predictions):
    print(f"task: {task_name}, input: {input} \n prediction=> {prediction}")
    print()
## Statement vs Question Classifier
input = ["where is berlin?", "is this a keyword query?", "Berlin is in Germany."]
task_name = "spaadia_squad_pairs"
sequence = tokenizer(input, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for input, prediction in zip(input, predictions):
    print(f"task: {task_name}, input: {input} \n prediction=> {prediction}")
    print()
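The post-processing step in both snippets is the same: the model returns one row of logits per input sentence, and the predicted class is the argmax of the softmax over that row. A plain-Python illustration on dummy logits, so it runs without downloading the model (the logit values below are made up):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a single row of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits_batch):
    """Per-row argmax of softmax, mirroring
    torch.argmax(torch.softmax(logits, dim=1), axis=1)."""
    preds = []
    for row in logits_batch:
        probs = softmax(row)
        preds.append(max(range(len(probs)), key=probs.__getitem__))
    return preds

# Dummy logits: two sentences, two classes each.
dummy_logits = [[2.3, -1.1], [-0.4, 0.9]]
print(predict(dummy_logits))  # → [0, 1]
```

Since softmax is monotonic, the argmax of the probabilities equals the argmax of the raw logits; the softmax is only needed if you also want confidence scores.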