Funnel Transformer large model (B8-8-8 with decoder)
Pretrained model on the English language using a similar objective to
ELECTRA
. It was introduced in
this paper
and first released in
this repository
. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which tokens are original and which have been replaced, a bit like GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
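The corruption-and-detection objective described above can be sketched in plain Python. This is a toy illustration of the idea only, not the actual pretraining code; the function name, vocabulary, and corruption rate are made up for the example:

```python
import random

def make_rtd_example(tokens, vocab, corrupt_prob=0.15, seed=0):
    """Toy sketch of an ELECTRA-style replaced-token-detection example.

    Randomly swaps some tokens for other vocabulary items and labels each
    position: 0 = original token kept, 1 = token was replaced. In real
    pretraining the replacements come from a small generator language
    model rather than uniform sampling.
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < corrupt_prob:
            # Replace the token with a different vocabulary item.
            replacement = rng.choice([v for v in vocab if v != tok])
            corrupted.append(replacement)
            labels.append(1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels
```

The discriminator (here, Funnel Transformer during pretraining) is then trained to predict the 0/1 label at every position of the corrupted sequence.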
Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the
model hub
to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
How to use
Here is how to use this model to get the features of a given text in PyTorch:
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = FunnelModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
and in TensorFlow:
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = TFFunnelModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
Training data
The Funnel Transformer model was pretrained on:
BookCorpus
, a dataset consisting of 11,038 unpublished books,
BibTeX entry and citation info
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}