interneuronai / az-stablelm

huggingface.co
Total runs: 1
24-hour runs: -1
7-day runs: -2
30-day runs: -1
Last Updated: March 10, 2024

Introduction

Model Details

Original Model: stabilityai/stablelm-2-1_6b
Fine-Tuned For: Azerbaijani language understanding and generation
Dataset Used: Azerbaijani translation of the Stanford Alpaca dataset
Fine-Tuning Method: Self-instruct

This model is part of the "project/Barbarossa" initiative, which aims to enhance natural language processing capabilities for the Azerbaijani language. By fine-tuning it on the Azerbaijani translation of the Stanford Alpaca dataset using the self-instruct method, we have made significant strides in improving AI's understanding and generation of Azerbaijani text.
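The Alpaca-style instruction format expected by the model can be reproduced with a small helper. This is a sketch: `build_prompt` is a hypothetical name, and the template mirrors the prompt shown in the usage example below.

```python
def build_prompt(instruction: str) -> str:
    """Wrap an instruction in the Azerbaijani Alpaca-style template."""
    return (
        # Azerbaijani for: "Below is an instruction that provides more
        # context. Write a response that adequately completes the request."
        "Aşağıda daha çox kontekst təmin edən təlimat var. "
        "Sorğunu adekvat şəkildə tamamlayan cavab yazın.\n"
        "### Təlimat:\n"
        f"{instruction}\n"
        "### Cavab:\n"
    )
```

The resulting string can be passed directly to a text-generation pipeline.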

Our primary objective with this model is to offer insights into the feasibility and outcomes of fine-tuning large language models (LLMs) for the Azerbaijani language. The fine-tuning process was undertaken with limited resources, providing valuable learnings rather than creating a model ready for production use. Therefore, we recommend treating this model as a reference or a guide to understanding the potential and challenges involved in fine-tuning LLMs for specific languages. It serves as a foundational step towards further research and development rather than a direct solution for production environments.

This project is a proud product of the Alas Development Center (ADC). We are thrilled to offer these fine-tuned large language models to the public, free of charge.

How to use?

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "alasdevcenter/az-stablelm"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)

instruction = "Təbiətin qorunması"  # "Protection of nature"

# Alpaca-style prompt; in Azerbaijani: "Below is an instruction that provides
# more context. Write a response that adequately completes the request."
formatted_prompt = f"""Aşağıda daha çox kontekst təmin edən təlimat var. Sorğunu adekvat şəkildə tamamlayan cavab yazın.
### Təlimat:
{instruction}
### Cavab:
"""

result = pipe(formatted_prompt)
print(result[0]['generated_text'])
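Because the pipeline returns the prompt together with the completion, the answer alone can be recovered by splitting on the "### Cavab:" marker. A minimal sketch (`extract_answer` is a hypothetical helper, not part of the model card):

```python
def extract_answer(generated_text: str, marker: str = "### Cavab:") -> str:
    """Return only the model's answer, i.e. the text after the marker."""
    # str.partition() leaves the third element empty when the marker is
    # absent, so the function falls back to an empty string in that case.
    _, _, answer = generated_text.partition(marker)
    return answer.strip()
```

For example, `extract_answer(result[0]['generated_text'])` would drop the echoed prompt and keep only the generated response.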


Model URL: https://huggingface.co/interneuronai/az-stablelm

Provider: interneuronai