TechWolf / JobBERT-v3

Last updated: July 30, 2025
Pipeline tag: feature-extraction

SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2

This is a sentence-transformers model trained specifically for job title matching and similarity. It is fine-tuned from sentence-transformers/paraphrase-multilingual-mpnet-base-v2 on a large dataset of job titles and their associated skills/requirements across multiple languages. The model maps English, Spanish, German, and Chinese job titles and descriptions to a 1024-dimensional dense vector space and can be used for semantic job title matching, job similarity search, and related HR/recruitment tasks.

Model Details
Model Description
  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
  • Maximum Sequence Length: 64 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: 4 × 5.2M high-quality job title–skills pairs in English, Spanish, German, and Chinese
Model Sources
  • Hugging Face: https://huggingface.co/TechWolf/JobBERT-v3
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Asym(
    (anchor-0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (positive-0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
  )
)
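
Because of the Asym head, each input is projected through a named branch: anchor for job titles and positive for skill texts. As a quick sanity check, here is a minimal sketch, assuming the standard sentence-transformers convention of passing {key: text} dicts to encode() for Asym models (the Usage section below shows the card's own encode helper):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("TechWolf/JobBERT-v3")

# Dict inputs select the Asym branch; 'anchor' is the job-title head.
emb = model.encode([{"anchor": "Software Engineer"}])
print(emb.shape)  # expected: (1, 1024)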
Usage
Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load and use the model with the following code. The helper functions below route each input through the model's anchor branch (required by the Asym head) and sort texts by length to reduce padding overhead:

import torch
import numpy as np
from tqdm.auto import tqdm
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import batch_to_device, cos_sim

# Load the model
model = SentenceTransformer("TechWolf/JobBERT-v3")

def encode_batch(jobbert_model, texts):
    # Tokenize and move the batch to the model's device
    features = jobbert_model.tokenize(texts)
    features = batch_to_device(features, jobbert_model.device)
    # Route inputs through the 'anchor' Dense head of the Asym module
    features["text_keys"] = ["anchor"]
    with torch.no_grad():
        out_features = jobbert_model.forward(features)
    return out_features["sentence_embedding"].cpu().numpy()

def encode(jobbert_model, texts, batch_size: int = 8):
    # Sort texts by length and keep track of original indices
    sorted_indices = np.argsort([len(text) for text in texts])
    sorted_texts = [texts[i] for i in sorted_indices]
    
    embeddings = []
    
    # Encode in batches
    for i in tqdm(range(0, len(sorted_texts), batch_size)):
        batch = sorted_texts[i:i+batch_size]
        embeddings.append(encode_batch(jobbert_model, batch))
    
    # Concatenate embeddings and reorder to original indices
    sorted_embeddings = np.concatenate(embeddings)
    original_order = np.argsort(sorted_indices)
    return sorted_embeddings[original_order]

# Example usage
job_titles = [
    'Software Engineer',
    '高级软件开发人员',  # senior software developer
    'Produktmanager',  # product manager
    'Científica de datos'  # data scientist
]

# Get embeddings
embeddings = encode(model, job_titles)

# Calculate cosine similarity matrix
similarities = cos_sim(embeddings, embeddings)
print(similarities)

The output will be a similarity matrix where each value represents the cosine similarity between two job titles:

tensor([[1.0000, 0.8087, 0.4673, 0.5669],
        [0.8087, 1.0000, 0.4428, 0.4968],
        [0.4673, 0.4428, 1.0000, 0.4292],
        [0.5669, 0.4968, 0.4292, 1.0000]])
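
Beyond a pairwise matrix, a common pattern is ranking a vocabulary of candidate titles against a query title. Here is a minimal sketch reusing the model and encode helper from above (the candidate titles are illustrative):

# Rank illustrative candidate titles against a query title
query_emb = encode(model, ["Backend Developer"])
candidates = ["Software Engineer", "Data Scientist", "Product Manager", "DevOps Engineer"]
cand_embs = encode(model, candidates)

scores = cos_sim(query_emb, cand_embs)[0]
for title, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {title}")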
Training Details
Training Dataset
Unnamed Dataset
  • Size: 21,123,868 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 10.56, max 38 tokens
    • positive (string): min 19, mean 61.08, max 64 tokens
  • Samples:
    • anchor: 通信与培训专员 (Communications and Training Specialist)
      positive: deliver online training, liaise with educational support staff, interact with an audience, construct individual learning plans, lead a team, develop corporate training programmes, learning technologies, communication, identify with the company's goals, address an audience, learning management systems, use presentation software, motivate others, provide learning support, engage with stakeholders, identify skills gaps, meet expectations of target audience, develop training programmes
    • anchor: Associate Infrastructure Engineer
      positive: create solutions to problems, design user interface, cloud technologies, use databases, automate cloud tasks, keep up-to-date to computer trends, work in teams, use object-oriented programming, keep updated on innovations in various business fields, design principles, Angular, adapt to changing situations, JavaScript, Agile development, manage stable, Swift (computer programming), keep up-to-date to design industry trends, monitor technology trends, web programming, provide mentorship, advise on efficiency improvements, adapt to change, JavaScript Framework, database management systems, stimulate creative processes
    • anchor: 客户顾问/出纳 (Customer Advisor / Cashier)
      positive: customer service, handle financial transactions, adapt to changing situations, have computer literacy, manage cash desk, attend to detail, provide customer guidance on product selection, perform multiple tasks at the same time, carry out financial transactions, provide membership service, manage accounts, adapt to change, identify customer's needs, solve problems
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "mini_batch_size": 512
    }
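
For reference, here is a minimal sketch of how this loss is typically instantiated with the sentence-transformers API (illustrative, not the authors' exact training script):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Base model named above; the released model adds pooling and Asym heads.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# scale and similarity_fct match the parameters listed above;
# mini_batch_size sets the gradient-caching chunk size.
loss = CachedMultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=cos_sim, mini_batch_size=512
)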
    
Training Hyperparameters
Non-Default Hyperparameters
  • overwrite_output_dir : True
  • per_device_train_batch_size : 2048
  • per_device_eval_batch_size : 2048
  • num_train_epochs : 1
  • fp16 : True
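
A minimal sketch of how these non-default values map onto SentenceTransformerTrainingArguments, assuming the standard sentence-transformers v4 trainer (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments

# Only the non-default values from the list above; everything else
# keeps its library default.
args = SentenceTransformerTrainingArguments(
    output_dir="jobbert-v3",  # placeholder path
    overwrite_output_dir=True,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    num_train_epochs=1,
    fp16=True,
)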
All Hyperparameters
  • overwrite_output_dir : True
  • do_predict : False
  • eval_strategy : no
  • prediction_loss_only : True
  • per_device_train_batch_size : 2048
  • per_device_eval_batch_size : 2048
  • per_gpu_train_batch_size : None
  • per_gpu_eval_batch_size : None
  • gradient_accumulation_steps : 1
  • eval_accumulation_steps : None
  • torch_empty_cache_steps : None
  • learning_rate : 5e-05
  • weight_decay : 0.0
  • adam_beta1 : 0.9
  • adam_beta2 : 0.999
  • adam_epsilon : 1e-08
  • max_grad_norm : 1.0
  • num_train_epochs : 1
  • max_steps : -1
  • lr_scheduler_type : linear
  • lr_scheduler_kwargs : {}
  • warmup_ratio : 0.0
  • warmup_steps : 0
  • log_level : passive
  • log_level_replica : warning
  • log_on_each_node : True
  • logging_nan_inf_filter : True
  • save_safetensors : True
  • save_on_each_node : False
  • save_only_model : False
  • restore_callback_states_from_checkpoint : False
  • no_cuda : False
  • use_cpu : False
  • use_mps_device : False
  • seed : 42
  • data_seed : None
  • jit_mode_eval : False
  • use_ipex : False
  • bf16 : False
  • fp16 : True
  • fp16_opt_level : O1
  • half_precision_backend : auto
  • bf16_full_eval : False
  • fp16_full_eval : False
  • tf32 : None
  • local_rank : 0
  • ddp_backend : None
  • tpu_num_cores : None
  • tpu_metrics_debug : False
  • debug : []
  • dataloader_drop_last : False
  • dataloader_num_workers : 0
  • dataloader_prefetch_factor : None
  • past_index : -1
  • disable_tqdm : False
  • remove_unused_columns : True
  • label_names : None
  • load_best_model_at_end : False
  • ignore_data_skip : False
  • fsdp : []
  • fsdp_min_num_params : 0
  • fsdp_config : {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap : None
  • accelerator_config : {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed : None
  • label_smoothing_factor : 0.0
  • optim : adamw_torch
  • optim_args : None
  • adafactor : False
  • group_by_length : False
  • length_column_name : length
  • ddp_find_unused_parameters : None
  • ddp_bucket_cap_mb : None
  • ddp_broadcast_buffers : False
  • dataloader_pin_memory : True
  • dataloader_persistent_workers : False
  • skip_memory_metrics : True
  • use_legacy_prediction_loop : False
  • push_to_hub : False
  • resume_from_checkpoint : None
  • hub_model_id : None
  • hub_strategy : every_save
  • hub_private_repo : None
  • hub_always_push : False
  • gradient_checkpointing : False
  • gradient_checkpointing_kwargs : None
  • include_inputs_for_metrics : False
  • include_for_metrics : []
  • eval_do_concat_batches : True
  • fp16_backend : auto
  • push_to_hub_model_id : None
  • push_to_hub_organization : None
  • mp_parameters :
  • auto_find_batch_size : False
  • full_determinism : False
  • torchdynamo : None
  • ray_scope : last
  • ddp_timeout : 1800
  • torch_compile : False
  • torch_compile_backend : None
  • torch_compile_mode : None
  • dispatch_batches : None
  • split_batches : None
  • include_tokens_per_second : False
  • include_num_input_tokens_seen : False
  • neftune_noise_alpha : None
  • optim_target_modules : None
  • batch_eval_metrics : False
  • eval_on_start : False
  • use_liger_kernel : False
  • eval_use_gather_object : False
  • average_tokens_across_devices : False
  • prompts : None
  • batch_sampler : batch_sampler
  • multi_dataset_batch_sampler : proportional
Training Logs
Epoch Step Training Loss
0.0485 500 3.89
0.0969 1000 3.373
0.1454 1500 3.1715
0.1939 2000 3.0414
0.2424 2500 2.9462
0.2908 3000 2.8691
0.3393 3500 2.8048
0.3878 4000 2.7501
0.4363 4500 2.7026
0.4847 5000 2.6601
0.5332 5500 2.6247
0.5817 6000 2.5951
0.6302 6500 2.5692
0.6786 7000 2.5447
0.7271 7500 2.5221
0.7756 8000 2.5026
0.8240 8500 2.4912
0.8725 9000 2.4732
0.9210 9500 2.4608
0.9695 10000 2.4548
Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed : 1.944 kWh
  • Carbon Emitted : 0.717 kg of CO2
  • Hours Used : 5.34 hours
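
For reference, a minimal CodeCarbon sketch of how such a measurement is taken (illustrative, not the authors' exact setup):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... training runs here ...
emissions_kg = tracker.stop()  # returns emitted CO2 in kg
print(f"Carbon emitted: {emissions_kg:.3f} kg")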
Training Hardware
  • On Cloud : Yes
  • GPU Model : 1 x NVIDIA A100-SXM4-40GB
  • CPU Model : Intel(R) Xeon(R) CPU @ 2.20GHz
  • RAM Size : 83.48 GB
Framework Versions
  • Python: 3.10.16
  • Sentence Transformers: 4.1.0
  • Transformers: 4.48.3
  • PyTorch: 2.6.0+cu126
  • Accelerate: 1.3.0
  • Datasets: 3.5.1
  • Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
CachedMultipleNegativesRankingLoss
@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

License

MIT: https://choosealicense.com/licenses/mit
