Synthyra / ESM2-150M

huggingface.co
Total runs: 539 | 24-hour runs: 0 | 7-day runs: 231 | 30-day runs: 231
Last updated: February 25 2026
Pipeline tag: fill-mask

Model Details

FastESM

FastESM is a Hugging Face-compatible plug-in version of ESM2, rewritten with PyTorch's newer scaled dot product attention (SDPA) implementation.

Load any ESM2 model into a FastEsm model to dramatically speed up training and inference without any cost in performance.

Outputting attention maps (or using the contact prediction head) is not natively possible with SDPA, because the fused kernel never materializes the attention matrix. You can still pass output_attentions=True to have attention calculated manually and returned. Various other optimizations also make this implementation slightly different from the one in transformers.
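The distinction above can be illustrated with standalone random tensors (not the model's actual weights): SDPA returns only the attention output, while the manual path first materializes the full (seq_len x seq_len) attention map, which is what output_attentions returns.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, heads, seq_len, head_dim = 2, 4, 11, 16
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# Fused kernel: returns only the attention output, never the weights.
sdpa_out = F.scaled_dot_product_attention(q, k, v)

# Manual path: materializes the attention map first, then applies it.
weights = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(head_dim), dim=-1)
manual_out = weights @ v

print(torch.allclose(sdpa_out, manual_out, atol=1e-5))  # True
```

Both paths compute the same result; the manual one simply pays extra memory to keep `weights` around.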

Use with 🤗 transformers
Supported models
model_dict = {
    # Synthyra/ESM2-8M
    'ESM2-8M': 'facebook/esm2_t6_8M_UR50D',
    # Synthyra/ESM2-35M
    'ESM2-35M': 'facebook/esm2_t12_35M_UR50D',
    # Synthyra/ESM2-150M
    'ESM2-150M': 'facebook/esm2_t30_150M_UR50D',
    # Synthyra/ESM2-650M
    'ESM2-650M': 'facebook/esm2_t33_650M_UR50D',
    # Synthyra/ESM2-3B
    'ESM2-3B': 'facebook/esm2_t36_3B_UR50D',
}
For working with embeddings
import torch
from transformers import AutoModel, AutoTokenizer

model_path = 'Synthyra/ESM2-8M'
model = AutoModel.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).eval()
tokenizer = model.tokenizer

sequences = ['MPRTEIN', 'MSEQWENCE']
tokenized = tokenizer(sequences, padding=True, return_tensors='pt')
with torch.no_grad():
    embeddings = model(**tokenized).last_hidden_state

print(embeddings.shape) # (2, 11, 320)
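To turn residue-wise embeddings into one vector per protein, a common approach is to mean-pool over the tokenizer's attention mask so padding tokens are excluded. A hedged sketch with random stand-in tensors (hidden size 320 matches ESM2-8M; real values come from last_hidden_state):

```python
import torch

torch.manual_seed(0)
# Stand-ins for last_hidden_state and the tokenizer's attention mask.
hidden = torch.randn(2, 11, 320)
mask = torch.tensor([[1] * 9 + [0] * 2,   # 7 residues + BOS/EOS, 2 pad tokens
                     [1] * 11])           # 9 residues + BOS/EOS, no padding

# Zero out padding positions, then divide by the real token count per sequence.
masked = hidden * mask.unsqueeze(-1)
pooled = masked.sum(dim=1) / mask.sum(dim=1, keepdim=True)

print(pooled.shape)  # torch.Size([2, 320])
```

This is the same idea as the `pooling_type='mean'` option of embed_dataset shown below.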
For working with sequence logits
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_path = 'Synthyra/ESM2-8M'
model = AutoModelForMaskedLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)

sequences = ['MPRTEIN', 'MSEQWENCE']
tokenized = tokenizer(sequences, padding=True, return_tensors='pt')
with torch.no_grad():
    logits = model(**tokenized).logits

print(logits.shape) # (2, 11, 33)
For working with attention maps
import torch
from transformers import AutoModel, AutoTokenizer

model_path = 'Synthyra/ESM2-8M'
model = AutoModel.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)

sequences = ['MPRTEIN', 'MSEQWENCE']
tokenized = tokenizer(sequences, padding=True, return_tensors='pt')
with torch.no_grad():
    attentions = model(**tokenized, output_attentions=True).attentions # tuple of (batch_size, num_heads, seq_len, seq_len)

print(attentions[-1].shape) # (2, 20, 11, 11) 
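As a crude illustration of why attention maps are useful for contacts (this is not the trained contact prediction head, just a toy proxy), one can average the maps over layers and heads and symmetrize, since residue contacts are symmetric. Random stand-in tensors below:

```python
import torch

torch.manual_seed(0)
# Stand-in for the tuple of per-layer attention maps returned above:
# each entry is (batch_size, num_heads, seq_len, seq_len).
attentions = tuple(torch.rand(2, 20, 11, 11) for _ in range(6))

# Toy contact proxy: average over layers (dim 0) and heads (dim 2),
# then symmetrize, since contact maps are symmetric.
stacked = torch.stack(attentions).mean(dim=(0, 2))  # (batch, seq_len, seq_len)
contact_proxy = 0.5 * (stacked + stacked.transpose(-2, -1))

print(contact_proxy.shape)  # torch.Size([2, 11, 11])
```

The real ESM2 contact head instead trains a regression on symmetrized, APC-corrected attention maps, but the shapes flow the same way.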
Embed entire datasets with no new code

To embed a list of protein sequences quickly, just call embed_dataset. Sequences are sorted by length to reduce padding tokens, so the progress bar's initial time estimate is usually much longer than the actual run time.
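The sorting trick mentioned above can be sketched in a few lines (a hypothetical re-implementation; embed_dataset's actual internals may differ): batches drawn from a length-sorted order contain similar-length sequences, so each batch pads only to its own longest member.

```python
# Toy dataset of protein strings of varying length.
sequences = ['MKT', 'M', 'MKTAYIAK', 'MK', 'MKTAY']
batch_size = 2

# Sort indices by sequence length so each batch holds similar-length
# sequences, minimizing wasted padding tokens per batch.
order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
batches = [[sequences[i] for i in order[j:j + batch_size]]
           for j in range(0, len(order), batch_size)]

print(batches)  # [['M', 'MK'], ['MKT', 'MKTAY'], ['MKTAYIAK']]
```

Embeddings are then restored to the caller's original order after the forward passes, which is why the early (long-sequence-free) batches make the progress bar overestimate.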

embeddings = model.embed_dataset(
    sequences=sequences, # list of protein strings
    batch_size=16, # embedding batch size
    max_len=2048, # truncate sequences to max_len
    full_embeddings=True, # return residue-wise embeddings
    full_precision=False, # True to store as float32; False for half precision
    pooling_type='mean', # mean pooling when full_embeddings=False
    num_workers=0, # number of data-loading workers
    sql=False, # return a dictionary mapping sequences to embeddings
)

_ = model.embed_dataset(
    sequences=sequences, # list of protein strings
    batch_size=16, # embedding batch size
    max_len=2048, # truncate sequences to max_len
    full_embeddings=True, # return residue-wise embeddings
    full_precision=False, # True to store as float32; False for half precision
    pooling_type='mean', # mean pooling when full_embeddings=False
    num_workers=0, # number of data-loading workers
    sql=True, # store sequences and embeddings in a local SQL database
    sql_db_path='embeddings.db', # path to .db file of choice
)
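For intuition on the `sql=True` path, here is a minimal stdlib-only sketch of storing float vectors keyed by sequence in SQLite. This is a hypothetical schema for illustration; embed_dataset's actual on-disk format is not documented here.

```python
import sqlite3
import struct

# Toy sequence -> embedding mapping (real vectors would be model outputs).
embeddings = {'MPRTEIN': [0.1, 0.2, 0.3], 'MSEQWENCE': [0.4, 0.5, 0.6]}

conn = sqlite3.connect(':memory:')  # use 'embeddings.db' for a file on disk
conn.execute('CREATE TABLE embeddings (sequence TEXT PRIMARY KEY, vector BLOB)')
for seq, vec in embeddings.items():
    blob = struct.pack(f'{len(vec)}f', *vec)  # pack float32 values as raw bytes
    conn.execute('INSERT INTO embeddings VALUES (?, ?)', (seq, blob))
conn.commit()

# Read one vector back and unpack the bytes into floats.
blob, = conn.execute(
    'SELECT vector FROM embeddings WHERE sequence = ?', ('MPRTEIN',)).fetchone()
vector = list(struct.unpack(f'{len(blob) // 4}f', blob))
print([round(x, 1) for x in vector])  # [0.1, 0.2, 0.3]
```

Storing vectors as packed blobs keeps the database compact and lets you fetch single proteins without loading the whole dataset into memory.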
Citation

If you use any of this implementation or work, please cite it (as well as the ESM2 paper).

@misc{FastESM2,
    author       = { Hallee, L. and Bichara, D. and Gleghorn, J. P. },
    title        = { FastESM2 },
    year         = 2024,
    url          = { https://huggingface.co/Synthyra/FastESM2_650 },
    doi          = { 10.57967/hf/3729 },
    publisher    = { Hugging Face }
}

Model URL: https://huggingface.co/Synthyra/ESM2-150M