SentEconBERT is an EconBERT-based language model fine-tuned specifically for sentiment analysis on economic and financial text. The model is designed to capture domain-specific language patterns, terminology, and contextual relationships in economic literature, research papers, financial reports, and related documents.
Note: The complete details of the model architecture, training methodology, evaluation, and performance metrics are available in our paper. Please refer to the citation section below.
Intended Uses & Limitations
Intended Uses
- Economic Text Classification: Categorizing economic documents, papers, or news articles
- Sentiment Analysis: Analyzing market sentiment in financial news and reports
- Information Extraction: Extracting structured data from unstructured economic texts
Limitations
- The model is specialized for economic and financial domains and may not perform as well on general-domain text
- For a detailed discussion of limitations, please refer to our paper
Training Data
SentEconBERT was trained on the FinancialPhraseBank dataset. For comprehensive information about the training data, including sources, size, preprocessing steps, and other details, please refer to our paper.
Evaluation Results
We evaluated SentEconBERT on several economic NLP tasks and compared its performance with general-purpose and other domain-specific models. The detailed evaluation methodology and complete results are available in our paper.
Key findings include:
Improved performance on economic domain tasks compared to general BERT models
How to Use
```python
from transformers import AutoTokenizer, AutoModel

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModel.from_pretrained("YourUsername/EconBERT")

# Example usage: encode a sentence and run a forward pass
text = "The Federal Reserve increased interest rates by 25 basis points."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
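For sentiment analysis, a classification head produces one logit per class, which is converted to a probability distribution with a softmax. The sketch below shows that post-processing step in isolation; the three-way label set and its ordering (negative/neutral/positive) are an assumption based on the FinancialPhraseBank annotation scheme, not something stated in this card.

```python
import numpy as np

# Assumed label order, following the FinancialPhraseBank convention.
# Verify against the model's config (id2label) before relying on it.
LABELS = ["negative", "neutral", "positive"]

def logits_to_sentiment(logits):
    """Convert raw classification logits to a (label, probability) pair."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                       # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()   # softmax
    idx = int(probs.argmax())
    return LABELS[idx], float(probs[idx])

# Example with made-up logits for a clearly positive sentence
label, prob = logits_to_sentiment([-1.2, 0.3, 2.1])
```

In practice the same mapping is handled for you by `transformers`' `pipeline("text-classification", ...)`, which reads the label names from the model's configuration.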
For task-specific fine-tuning and applications, please refer to our paper and the examples provided in our GitHub repository.
Citation
If you use EconBERT in your research, please cite our paper:
```bibtex
@article{Zhang2025econbert,
  title={EconBERT: A Large Language Model for Economics},
  author={Zhang, Philip and Rojcek, Jakub and Leippold, Markus},
  journal={SSRN Working Paper},
  year={2025},
  publisher={University of Zurich}
}
```
Additional Information
- Model Type: BERT
- Language(s): English
- License: MIT
For more detailed information about model architecture, training methodology, evaluation results, and applications, please refer to our paper.