This model is a PyCodeGPT model further pre-trained on text-to-code pairs collected from public GitHub repositories. Training used the CodeCLM objective, i.e. causal language modeling in which the loss is computed only over code tokens, combined with full embedding separation between the natural-language and code modalities.
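As an illustration of the objective (not the exact training code), the sketch below shows the loss-masking part of CodeCLM: labels for the natural-language prompt are set to the ignore index so that only code tokens contribute to the loss. It uses `gpt2` as a stand-in backbone and a hypothetical text/code pair; the full-embedding-separation step, which maps code tokens onto a separate embedding table, is not shown.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Stand-in backbone for illustration; the actual model further trains PyCodeGPT.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Return the sum of two numbers."        # natural-language description
code = "def add(a, b):\n    return a + b\n"    # target code

text_ids = tokenizer(text, return_tensors="pt").input_ids
code_ids = tokenizer(code, return_tensors="pt").input_ids

input_ids = torch.cat([text_ids, code_ids], dim=1)
labels = input_ids.clone()
labels[:, : text_ids.size(1)] = -100           # ignore loss on the text tokens

loss = model(input_ids=input_ids, labels=labels).loss  # loss over code tokens only
```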
To use the model, first download it from the hub and have a look at the evaluation section.
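Below is a minimal generation sketch, assuming the checkpoint loads with the standard transformers causal-LM classes; the exact prompting format and decoding settings used for evaluation are described in the evaluation section and may differ.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "huawei-noah/pycodegpt-CodeCLM-full-100m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt for illustration only.
prompt = "Write a function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```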
Citation
BibTeX:
@inproceedings{christopoulou-etal-2024-text,
title = "Text-to-Code Generation with Modality-relative Pre-training",
author = "Christopoulou, Fenia and
Zhang, Guchun and
Lampouras, Gerasimos",
editor = "Graham, Yvette and
Purver, Matthew",
booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = mar,
year = "2024",
address = "St. Julian{'}s, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.eacl-long.72",
pages = "1194--1208",
abstract = "Large pre-trained language models have recently been expanded and applied to programming language tasks with great success, often through further pre-training of a strictly-natural language model{--}where training sequences typically contain both natural and (linearised) programming language. Such approaches effectively map both modalities of the sequence into the same embedding space. However, programming language keywords (e.g. {``}while{''}) often have very strictly defined semantics. As such, transfer learning from their natural language usage may not necessarily be beneficial to their code application and vise versa. Assuming an already pre-trained language model, in this work we investigate how sequence tokens can be adapted and represented differently, depending on which modality they belong to, and to the ultimate benefit of the downstream task. We experiment with separating embedding spaces between modalities during further model pre-training with modality-relative training objectives. We focus on text-to-code generation and observe consistent improvements across two backbone models and two test sets, measuring pass@$k$ and a novel incremental variation.",
}