Synthyra / Profluent-E1-600M

huggingface.co
Task: fill-mask
Model last updated: November 27, 2025

Introduction to Profluent-E1-600M

Model Details of Profluent-E1-600M

NOTE

The GitHub repository with the implementation and requirements.txt can be found here.

Profluent-E1

Synthyra's version of Profluent-E1 is a faithful implementation of Profluent's E1 models (license) that adds Hugging Face AutoModel compatibility and convenient embedding functionality.

Use with 🤗 transformers
Supported models
model_dict = {
    # Synthyra/Profluent-E1-150M
    'Profluent-E1-150M': 'Profluent-Bio/E1-150m',
    # Synthyra/Profluent-E1-300M
    'Profluent-E1-300M': 'Profluent-Bio/E1-300m',
    # Synthyra/Profluent-E1-600M
    'Profluent-E1-600M': 'Profluent-Bio/E1-600m',
}
import torch
from transformers import AutoModelForMaskedLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForMaskedLM.from_pretrained('Synthyra/Profluent-E1-150M', trust_remote_code=True, dtype=torch.bfloat16).eval().to(device)

sequences = ['MPRTEIN', 'MSEQWENCE']
batch = model.prep_tokens.get_batch_kwargs(sequences, device=device)

output = model(**batch) # get all hidden states with output_hidden_states=True
print(output.logits.shape) # language modeling logits, (batch_size, seq_len, vocab_size), (2, 11, 34)
print(output.last_hidden_state.shape) # last hidden state of the model, (batch_size, seq_len, hidden_size), (2, 11, 768)
print(output.loss) # language modeling loss if you passed labels
#print(output.hidden_states) # all hidden states if you passed output_hidden_states=True (in tuple)
#print(output.attentions) # all attention matrices if you passed output_attentions=True (in tuple)
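
As a minimal sketch of the optional outputs described in the comments above (using the input ids as labels is purely illustrative, and assumes the batch dict from prep_tokens contains an 'input_ids' key):

output = model(**batch, labels=batch['input_ids'], output_hidden_states=True)
print(output.loss) # masked language modeling loss against the provided labels
print(len(output.hidden_states)) # tuple of hidden states, one per layer plus the embedding output
print(output.hidden_states[-1].shape) # (batch_size, seq_len, hidden_size)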

Our E1 implementation also supports sequence and token level classification tasks like ESM2. Simply pass the number of labels during initialization.

from transformers import AutoModelForSequenceClassification, AutoModelForTokenClassification

model = AutoModelForSequenceClassification.from_pretrained('Synthyra/Profluent-E1-150M', num_labels=2, trust_remote_code=True).to(device)
labels = torch.tensor([0, 1], device=device) # example sequence-level labels, one per input sequence
logits = model(**batch, labels=labels).logits
print(logits.shape) # (batch_size, num_labels), (2, 2)
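
Token-level classification follows the same pattern; this is a small sketch reusing the batch and device from above (the printed shape is illustrative):

token_model = AutoModelForTokenClassification.from_pretrained('Synthyra/Profluent-E1-150M', num_labels=2, trust_remote_code=True).to(device)
token_logits = token_model(**batch).logits
print(token_logits.shape) # (batch_size, seq_len, num_labels), e.g. (2, 11, 2)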

E1 weights were trained in bf16 and are in bf16 by default. You can load them in the precision of your choosing by leveraging the dtype parameter:

import torch
model = AutoModelForMaskedLM.from_pretrained('Synthyra/Profluent-E1-150M', trust_remote_code=True, dtype=torch.float) # fp32
Embed entire datasets with no new code

To embed a list of protein sequences quickly, just call embed_dataset. Sequences are sorted to reduce padding tokens, so the initial progress-bar estimate is usually much longer than the actual time it will take.

Example:

embedding_dict = model.embed_dataset(
    sequences=[
        'MALWMRLLPLLALLALWGPDPAAA', ... # list of protein sequences
    ],
    batch_size=2, # adjust for your GPU memory
    max_len=512, # adjust for your needs
    full_embeddings=False, # if True, no pooling is performed
    embed_dtype=torch.float32, # cast to what dtype you want
    pooling_types=['mean', 'cls'], # more than one pooling type will be concatenated together
    sql=False, # if True, embeddings will be stored in SQLite database
    sql_db_path='embeddings.db',
    save=True, # if True, embeddings will be saved as a .pth file
    save_path='embeddings.pth',
)
# embedding_dict is a dictionary mapping sequences to their embeddings as tensors for .pth or numpy arrays for sql
model.embed_dataset()
Args:
    sequences: List of protein sequences
    batch_size: Batch size for processing
    max_len: Maximum sequence length
    full_embeddings: Whether to return full residue-wise embeddings (True) or pooled embeddings (False)
    pooling_types: List of pooling types ('mean' and/or 'cls'); multiple types are concatenated
    sql: Whether to store embeddings in SQLite database - will be stored in float32
    sql_db_path: Path to SQLite database
    
Returns:
    Dictionary mapping sequences to embeddings, or None if sql=True

Note:
    - If sql=True, embeddings can only be stored in float32
    - sql is ideal if you need to stream a very large dataset for training in real-time
    - save=True is ideal if you can store the entire embedding dictionary in RAM (see the loading sketch after these notes)
    - If both sql and save are True, the sql option takes precedence
    - If your sql database or .pth file is already present, they will be scanned first for already embedded sequences
    - Sequences will be truncated to max_len and sorted by length in descending order for faster processing
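
If you saved embeddings with save=True, a minimal sketch for reloading them later (this assumes the .pth file holds the plain sequence-to-tensor dictionary described above):

import torch

embedding_dict = torch.load('embeddings.pth', map_location='cpu') # dict mapping sequence strings to embedding tensors
for seq, emb in list(embedding_dict.items())[:3]:
    print(seq[:20], emb.shape) # pooled embeddings concatenate one vector per pooling type
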
Fine-tuning with 🤗 peft
from peft import LoraConfig, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained('Synthyra/Profluent-E1-150M', num_labels=2, trust_remote_code=True)
# these modules handle E1 attention layers
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"]

lora_config = LoraConfig(
    r=8, # choose lora parameters to your liking
    lora_alpha=16,
    lora_dropout=0.01,
    bias="none",
    target_modules=target_modules,
)

# Apply LoRA to the model
model = get_peft_model(model, lora_config)

# Unfreeze the classifier head
for param in model.classifier.parameters():
    param.requires_grad = True
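
As a rough sketch of a single training step on top of the LoRA setup above (the optimizer, learning rate, and labels are illustrative placeholders, reusing batch and device from earlier):

model = model.to(device)
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4) # only LoRA and classifier params are trainable
labels = torch.tensor([0, 1], device=device) # hypothetical sequence-level labels for the two example sequences
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()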

For a more thorough example of fine-tuning, check out our example script here.

Citation

If you use any of this implementation or work, please cite the following DOI and Profluent's paper.

@misc{FastPLMs,
    author       = {Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
    title        = {FastPLMs: Fast, efficient protein language model inference from Huggingface AutoModel.},
    year         = {2024},
    url          = {https://huggingface.co/Synthyra/ESMplusplus_small},
    DOI          = {10.57967/hf/3726},
    publisher    = {Hugging Face}
}
@article{Jain_Beazer_Ruffolo_Bhatnagar_Madani_2025,
    title={E1: Retrieval-Augmented Protein Encoder Models},
    url={https://www.biorxiv.org/content/early/2025/11/13/2025.11.12.688125},
    DOI={10.1101/2025.11.12.688125},
    journal={bioRxiv},
    publisher={Cold Spring Harbor Laboratory},
    author={Jain, Sarthak and Beazer, Joel and Ruffolo, Jeffrey A and Bhatnagar, Aadyot and Madani, Ali},
    year={2025}
}

Runs of Synthyra Profluent-E1-600M on huggingface.co

Total runs: 625
24-hour runs: 0
3-day runs: -9
7-day runs: -6
30-day runs: 2

More Information About the Profluent-E1-600M Model

Profluent-E1-600M on huggingface.co

Profluent-E1-600M is an AI model hosted on huggingface.co that can be used directly through this Synthyra Profluent-E1-600M repository. huggingface.co offers a free trial of the Profluent-E1-600M model as well as paid usage, and the model can be called through an API from Node.js, Python, or plain HTTP.

Profluent-E1-600M huggingface.co URL:

https://huggingface.co/Synthyra/Profluent-E1-600M

Profluent-E1-600M install

The implementation of Profluent-E1-600M is open source on GitHub, where any user can find and install it for free. huggingface.co also hosts the model, so users can try and debug Profluent-E1-600M directly on huggingface.co at the URL above, including free API access.

Provider of Profluent-E1-600M

Synthyra
