The model maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. The easiest way to compare two sentences is to measure the cosine similarity between their embeddings: sentences that are close in meaning have a cosine similarity close to 1. The model is trained so that similar sentences in different languages are also close to each other; ideally, an English-Norwegian sentence pair with the same meaning should have high similarity.
This release is a non-generative encoder model whose outputs are vectors/scores rather than language or media. Its intended functionality is limited to representation, retrieval, ranking, or classification support. On that basis, the release is preliminarily assessed as not falling within the provider obligations for GPAI models under the EU AI Act definitions, subject to legal confirmation if capability scope or marketed generality changes. For more information, see the Model Documentation Form here.
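Usage (Sentence Transformers)
The quickest way to use the model is through the sentence-transformers library; the snippet below assumes it is installed (pip install -U sentence-transformers):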
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("NbAiLab/nb-sbert-v2-base")
# Run inference
sentences = [
"This is a Norwegian boy",
"Dette er en norsk gutt"
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (2, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8287],
#         [0.8287, 1.0000]])
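The same embeddings support semantic search, one of the use cases mentioned above. A minimal sketch, assuming a small in-memory corpus (the corpus and query sentences here are made-up examples):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("NbAiLab/nb-sbert-v2-base")

# Hypothetical mixed-language corpus to search over
corpus = [
    "Hunden løper i parken",     # "The dog is running in the park"
    "Jeg liker å lese bøker",    # "I like to read books"
    "The weather is cold today",
]
query = "A dog is playing outside"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# model.similarity computes cosine similarity between all pairs
scores = model.similarity(query_embedding, corpus_embeddings)  # shape (1, 3)
best = scores.argmax().item()
print(corpus[best])  # expected to be the Norwegian dog sentence

Because the model is cross-lingual, an English query can retrieve Norwegian documents directly, without translation.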
Direct Usage (Transformers)
Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
import torch
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoTokenizer, AutoModel
# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["This is a Norwegian boy", "Dette er en norsk gutt"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NbAiLab/nb-sbert-v2-base')
model = AutoModel.from_pretrained('NbAiLab/nb-sbert-v2-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(embeddings.shape)
# torch.Size([2, 768])
similarity = cosine_similarity(embeddings[0].reshape(1, -1), embeddings[1].reshape(1, -1))
print(similarity)
# This should give 0.8287 in the example above.
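If you want to avoid the scikit-learn dependency, the same number can be computed in plain PyTorch by L2-normalizing the pooled embeddings, since the dot product of unit vectors equals their cosine similarity. A small sketch, continuing from the embeddings variable above:

import torch.nn.functional as F

# Normalize to unit length; the dot product then equals cosine similarity
normalized = F.normalize(embeddings, p=2, dim=1)
print((normalized[0] @ normalized[1]).item())  # ~0.8287, matching the result above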
Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:---|:---|:---|:---|
| type | string | string | string |
| details | min: 5 tokens<br>mean: 20.91 tokens<br>max: 130 tokens | min: 5 tokens<br>mean: 20.91 tokens<br>max: 130 tokens | min: 5 tokens<br>mean: 14.14 tokens<br>max: 39 tokens |
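For context, figures like these can be reproduced by tokenizing a sample of the data with the model's own tokenizer. A rough sketch (the texts list is a stand-in for the first 1000 dataset samples):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-sbert-v2-base")

# Stand-in sample; in practice, iterate over the actual anchor/positive/negative columns
texts = ["Syklusen gjentar seg ved neste jobb."]

lengths = [len(tokenizer(t)["input_ids"]) for t in texts]
print(min(lengths), sum(lengths) / len(lengths), max(lengths))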
Samples:

| anchor | positive | negative |
|:---|:---|:---|
| Det som følger er mindre en glid nedover en glatt skråning enn et profesjonelt skred som resulterer i enten en oppsigelse eller en smal flukt til neste drømmejobb, der, selvfølgelig, syklusen gjentas igjen. | Syklusen gjentar seg ved neste jobb. | Syklusen gjentar seg sjelden ved neste jobb. |
| Syklusen gjentar seg ved neste jobb. | Det som følger er mindre en glid nedover en glatt skråning enn et profesjonelt skred som resulterer i enten en oppsigelse eller en smal flukt til neste drømmejobb, der, selvfølgelig, syklusen gjentas igjen. | Syklusen gjentar seg sjelden ved neste jobb. |
| The public areas are spectacular, the rooms a bit less so, but a long-awaited renovation was carried out in 1998. | The rooms are nice, but the public area is in a league of it's own. | The public area was fine, but the rooms were really something else. |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
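As a rough illustration of how MultipleNegativesRankingLoss is typically applied to (anchor, positive, negative) triplets like the samples above (the single example and hyperparameters here are placeholders, not the actual training configuration):

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("NbAiLab/nb-sbert-v2-base")

# Placeholder triplet; the real data is the triplet dataset described above
train_examples = [
    InputExample(texts=[
        "Syklusen gjentar seg ved neste jobb.",          # anchor
        "Ved neste jobb gjentar syklusen seg.",          # positive (paraphrase)
        "Syklusen gjentar seg sjelden ved neste jobb.",  # negative
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# With triplets, the loss uses the explicit negative plus in-batch negatives
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)

Using every other in-batch positive as an additional negative is what makes this loss efficient: larger batches yield more negatives per anchor at no extra labeling cost.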
NbAiLab/nb-bert-base
@inproceedings{kummervold-etal-2021-operationalizing,
title = {Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model},
author = {Kummervold, Per E and
De la Rosa, Javier and
Wetjen, Freddy and
Brygfjeld, Svein Arne},
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
year = {2021},
address = {Reykjavik, Iceland (Online)},
publisher = {Linköping University Electronic Press, Sweden},
url = {https://huggingface.co/papers/2104.09617},
pages = {20--29},
abstract = {In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokmål and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.},
}
Citing & Authors
The model was trained by Victoria Handford and Lucas Georges Gabriel Charpentier. The documentation was initially autogenerated by the SentenceTransformers library and then revised by Victoria Handford, Lucas Georges Gabriel Charpentier, and Javier de la Rosa.