FastESM is a Hugging Face-compatible plug-in version of ESM2 rewritten with PyTorch's newer scaled dot product attention (SDPA) implementation. Load any ESM2 checkpoint into a FastEsm model to dramatically speed up training and inference without any cost in performance.
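For context, here is a minimal sketch of the PyTorch SDPA call that this implementation builds on (illustrative only; the shapes below are placeholders, not the actual FastESM internals):

```python
import torch
import torch.nn.functional as F

# toy shapes: (batch, num_heads, seq_len, head_dim)
q, k, v = (torch.randn(2, 20, 11, 64) for _ in range(3))

# fused attention kernel; it returns only the attention output, never the
# attention weights, which is why attention maps require a manual fallback
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0)
print(out.shape)  # (2, 20, 11, 64)
```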
Outputting attention maps (or using the contact prediction head) is not natively possible with SDPA. You can still pass `output_attentions=True` to have attention calculated manually and returned.
Various other optimizations also make the base implementation slightly different from the one in transformers.
Example of masked language modeling inference (the checkpoint path and example sequences below are placeholders):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_path = 'Synthyra/ESM2-150M'  # any FastESM checkpoint
model = AutoModelForMaskedLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
sequences = ['MPRTEIN', 'MSEQWENCE']  # example protein sequences
tokenized = tokenizer(sequences, padding=True, return_tensors='pt')
with torch.no_grad():
    logits = model(**tokenized).logits
print(logits.shape)  # (2, 11, 33)
```
For working with attention maps:
```python
import torch
from transformers import AutoModel, AutoTokenizer

# model_path and tokenized as in the example above
model = AutoModel.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).eval()
with torch.no_grad():
    # tuple with one tensor per layer, each (batch_size, num_heads, seq_len, seq_len)
    attentions = model(**tokenized, output_attentions=True).attentions
print(attentions[-1].shape)  # (2, 20, 11, 11)
```
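The returned tuple can be stacked into a single tensor for downstream analysis; a simple aggregation sketch (this is not the trained contact prediction head, just an unweighted average):

```python
import torch

# stack the per-layer tuple: (num_layers, batch, num_heads, seq_len, seq_len)
attn_stack = torch.stack(attentions, dim=0)
# average over layers and heads for a crude residue-residue interaction map
mean_attn = attn_stack.mean(dim=(0, 2))
print(mean_attn.shape)  # (2, 11, 11)
```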
Embed entire datasets with no new code
To embed a list of protein sequences quickly, just call `embed_dataset`. Sequences are sorted to reduce padding tokens, so the initial progress-bar estimate is usually much longer than the actual runtime.
```python
embeddings = model.embed_dataset(
    sequences=sequences,   # list of protein strings
    batch_size=16,         # embedding batch size
    max_len=2048,          # truncate to max_len
    full_embeddings=True,  # return residue-wise embeddings
    full_precision=False,  # set True to store embeddings as float32
    pooling_type='mean',   # pooling used for protein-wise embeddings
    num_workers=0,         # data loading num workers
    sql=False,             # return a dictionary of sequences and embeddings
)
```
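With sql=False the call returns a dictionary of sequences and embeddings; a minimal sketch of reading it back (exact value shapes depend on full_embeddings and pooling_type):

```python
# embeddings maps each input sequence string to its embedding tensor
for seq, emb in embeddings.items():
    # residue-wise: (seq_len, hidden_size); pooled: (hidden_size,)
    print(seq[:10], emb.shape)
```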
To write the embeddings to a local SQL database instead, set sql=True and give a database path:

```python
_ = model.embed_dataset(
    sequences=sequences,   # list of protein strings
    batch_size=16,         # embedding batch size
    max_len=2048,          # truncate to max_len
    full_embeddings=True,  # return residue-wise embeddings
    full_precision=False,  # set True to store embeddings as float32
    pooling_type='mean',   # pooling used for protein-wise embeddings
    num_workers=0,         # data loading num workers
    sql=True,              # store sequences and embeddings in a local SQL database
    sql_db_path='embeddings.db',  # path to .db file of choice
)
```
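The on-disk schema is not documented here; to inspect what embed_dataset wrote, the standard library's sqlite3 module works (assuming the SQLite backing implied by the .db path):

```python
import sqlite3

conn = sqlite3.connect('embeddings.db')
# list the tables created so you know what to query
print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
conn.close()
```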
Citation
If you use this implementation or work, please cite it (as well as the ESM2 paper).
```bibtex
@misc{FastESM2,
  author    = {Hallee, L. and Bichara, D. and Gleghorn, J. P.},
  title     = {FastESM2},
  year      = {2024},
  url       = {https://huggingface.co/Synthyra/FastESM2_650},
  doi       = {10.57967/hf/3729},
  publisher = {Hugging Face}
}
```