amentaphd / test2308

huggingface.co
Total runs: 8 (24-hour: 0, 7-day: 0, 30-day: 0)
Last updated: April 9, 2025
Pipeline tag: sentence-similarity

SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a sentence-transformers model fine-tuned from Snowflake/snowflake-arctic-embed-m. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details
Model Description
  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
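The pipeline above is simple to reason about: the BertModel produces one 768-dimensional vector per token, the Pooling module (configured with pooling_mode_cls_token=True) keeps only the first ([CLS]) token's vector, and Normalize() rescales it to unit L2 norm. A minimal NumPy sketch of the last two stages, using synthetic token embeddings in place of real BertModel output:

```python
import numpy as np

# Synthetic stand-in for BertModel output: (seq_len, hidden_dim) token embeddings.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(12, 768))

# Pooling with pooling_mode_cls_token=True keeps only the first ([CLS]) token.
sentence_embedding = token_embeddings[0]

# Normalize() rescales to unit L2 norm, so dot products become cosine similarities.
sentence_embedding = sentence_embedding / np.linalg.norm(sentence_embedding)

print(sentence_embedding.shape)                    # (768,)
print(round(float(np.linalg.norm(sentence_embedding)), 6))  # 1.0
```

Because of the final Normalize() step, downstream code can compare embeddings with a plain dot product instead of a full cosine computation.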
Usage
Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("amentaphd/test2308")
# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Information Retrieval
Metric Value
cosine_accuracy@1 1.0
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 1.0
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 1.0
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 1.0
cosine_mrr@10 1.0
cosine_map@100 1.0
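The pattern in this table (accuracy and recall pinned at 1.0 while precision@k falls as 1/k) is exactly what you get when each query has a single relevant document that is always retrieved at rank 1, consistent with the one-sample training set below. A small pure-Python check, with hypothetical document ids:

```python
# precision@k: fraction of the top-k results that are relevant.
def precision_at_k(relevant, ranked, k):
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

# recall@k: fraction of all relevant documents found in the top-k results.
def recall_at_k(relevant, ranked, k):
    return sum(1 for doc in ranked[:k] if doc in relevant) / len(relevant)

# One relevant document, retrieved at rank 1 (ids are illustrative).
relevant = {"doc_a"}
ranked = ["doc_a", "doc_b", "doc_c", "doc_d", "doc_e",
          "doc_f", "doc_g", "doc_h", "doc_i", "doc_j"]

for k in (1, 3, 5, 10):
    print(k, round(precision_at_k(relevant, ranked, k), 4),
          recall_at_k(relevant, ranked, k))
# 1 1.0 1.0
# 3 0.3333 1.0
# 5 0.2 1.0
# 10 0.1 1.0
```

These values match the precision@k and recall@k rows above, so the perfect scores say more about the tiny evaluation set than about the model.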
Training Details
Training Dataset
Unnamed Dataset
  • Size: 1 training sample
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1 samples:
    sentence_0: string; min 27, mean 27.0, max 27 tokens
    sentence_1: string; min 65, mean 65.0, max 65 tokens
  • Samples:
    sentence_0: QUESTION #1: What action must potential registrants take if they fail to reach an agreement with previous registrants?
    sentence_1: 5. If there is failure to reach such an agreement, the potential registrant(s) shall inform the Agency and the previous registrant(s) thereof at the earliest one month after receipt, from the Agency, of the name and address of the previous registrant(s). 6.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
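With these parameters, MatryoshkaLoss applies MultipleNegativesRankingLoss at each listed dimensionality (768 down to 64, all weighted equally), training the embedding so that a prefix of the vector remains a usable embedding on its own. A minimal NumPy sketch of how such an embedding is truncated at inference time, using a synthetic vector in place of real model output:

```python
import numpy as np

# Synthetic stand-in for a full 768-dim, unit-norm Matryoshka embedding.
rng = np.random.default_rng(0)
full = rng.normal(size=768)
full /= np.linalg.norm(full)

# Keep only the first 256 dimensions (one of the trained matryoshka_dims)
# and re-normalize, giving a cheaper embedding for storage and retrieval.
truncated = full[:256]
truncated = truncated / np.linalg.norm(truncated)

print(truncated.shape)                               # (256,)
print(round(float(np.linalg.norm(truncated)), 6))    # 1.0
```

In Sentence Transformers this truncation can also be requested when loading the model (the truncate_dim argument), so downstream code never sees the full vector.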
    
Training Hyperparameters
Non-Default Hyperparameters
  • eval_strategy : steps
  • per_device_train_batch_size : 2
  • per_device_eval_batch_size : 2
  • num_train_epochs : 1
  • multi_dataset_batch_sampler : round_robin
All Hyperparameters
Click to expand
  • overwrite_output_dir : False
  • do_predict : False
  • eval_strategy : steps
  • prediction_loss_only : True
  • per_device_train_batch_size : 2
  • per_device_eval_batch_size : 2
  • per_gpu_train_batch_size : None
  • per_gpu_eval_batch_size : None
  • gradient_accumulation_steps : 1
  • eval_accumulation_steps : None
  • torch_empty_cache_steps : None
  • learning_rate : 5e-05
  • weight_decay : 0.0
  • adam_beta1 : 0.9
  • adam_beta2 : 0.999
  • adam_epsilon : 1e-08
  • max_grad_norm : 1
  • num_train_epochs : 1
  • max_steps : -1
  • lr_scheduler_type : linear
  • lr_scheduler_kwargs : {}
  • warmup_ratio : 0.0
  • warmup_steps : 0
  • log_level : passive
  • log_level_replica : warning
  • log_on_each_node : True
  • logging_nan_inf_filter : True
  • save_safetensors : True
  • save_on_each_node : False
  • save_only_model : False
  • restore_callback_states_from_checkpoint : False
  • no_cuda : False
  • use_cpu : False
  • use_mps_device : False
  • seed : 42
  • data_seed : None
  • jit_mode_eval : False
  • use_ipex : False
  • bf16 : False
  • fp16 : False
  • fp16_opt_level : O1
  • half_precision_backend : auto
  • bf16_full_eval : False
  • fp16_full_eval : False
  • tf32 : None
  • local_rank : 0
  • ddp_backend : None
  • tpu_num_cores : None
  • tpu_metrics_debug : False
  • debug : []
  • dataloader_drop_last : False
  • dataloader_num_workers : 0
  • dataloader_prefetch_factor : None
  • past_index : -1
  • disable_tqdm : False
  • remove_unused_columns : True
  • label_names : None
  • load_best_model_at_end : False
  • ignore_data_skip : False
  • fsdp : []
  • fsdp_min_num_params : 0
  • fsdp_config : {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap : None
  • accelerator_config : {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed : None
  • label_smoothing_factor : 0.0
  • optim : adamw_torch
  • optim_args : None
  • adafactor : False
  • group_by_length : False
  • length_column_name : length
  • ddp_find_unused_parameters : None
  • ddp_bucket_cap_mb : None
  • ddp_broadcast_buffers : False
  • dataloader_pin_memory : True
  • dataloader_persistent_workers : False
  • skip_memory_metrics : True
  • use_legacy_prediction_loop : False
  • push_to_hub : False
  • resume_from_checkpoint : None
  • hub_model_id : None
  • hub_strategy : every_save
  • hub_private_repo : None
  • hub_always_push : False
  • gradient_checkpointing : False
  • gradient_checkpointing_kwargs : None
  • include_inputs_for_metrics : False
  • include_for_metrics : []
  • eval_do_concat_batches : True
  • fp16_backend : auto
  • push_to_hub_model_id : None
  • push_to_hub_organization : None
  • mp_parameters :
  • auto_find_batch_size : False
  • full_determinism : False
  • torchdynamo : None
  • ray_scope : last
  • ddp_timeout : 1800
  • torch_compile : False
  • torch_compile_backend : None
  • torch_compile_mode : None
  • dispatch_batches : None
  • split_batches : None
  • include_tokens_per_second : False
  • include_num_input_tokens_seen : False
  • neftune_noise_alpha : None
  • optim_target_modules : None
  • batch_eval_metrics : False
  • eval_on_start : False
  • use_liger_kernel : False
  • eval_use_gather_object : False
  • average_tokens_across_devices : False
  • prompts : None
  • batch_sampler : batch_sampler
  • multi_dataset_batch_sampler : round_robin
Training Logs
Epoch Step cosine_ndcg@10
1.0 1 1.0
Framework Versions
  • Python: 3.12.7
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cpu
  • Accelerate: 0.26.0
  • Datasets: 3.4.1
  • Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}


Model URL: https://huggingface.co/amentaphd/test2308
