Clemylia / Loo-Aricate

huggingface.co
Total runs: 54
24-hour runs: 0
7-day runs: 0
30-day runs: 54
Last updated: November 14, 2025
text-generation


🍪 LOO-ARICATE 🍪

Loo

Loo-Aricate is an AI SLM (small language model) built from scratch on the Aricate v4 architecture. Loo is a lightweight micro-SLM for numeric sequences, trained on very little data: not words but, in its case, digits.

📘 Usage 📕

Loo-Aricate's sole task is to generate number sequences, calculations, and digits on request (prompt) from the user. 🛑 Unlike most ARICATE SLMs, it was not trained on words.

The only words it has ever seen are those in user prompts, and it cannot produce words itself: its answers can only contain the digits and symbols it saw during training (1, 2, 3, etc.).
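This word-level behaviour can be sketched in a few lines. The tiny vocabulary below is hypothetical (the real one ships as aricate_tokenizer.txt in the repo), but it shows how any word outside training collapses to <unk> while digits and symbols survive:

```python
# Hypothetical mini-vocabulary; the real mapping is loaded from aricate_tokenizer.txt.
word_to_id = {'<pad>': 0, '<unk>': 1, '<eos>': 2, '<sep>': 3,
              '1': 4, '2': 5, '3': 6, '+': 7}
id_to_word = {i: w for w, i in word_to_id.items()}

def encode(text):
    # Prompt words the model never saw all map to <unk>; digits/symbols keep their ids.
    return [word_to_id.get(w, word_to_id['<unk>']) for w in text.lower().split()]

def decode(ids):
    specials = {'<pad>', '<unk>', '<eos>', '<sep>'}
    return " ".join(id_to_word.get(i, '<unk>') for i in ids
                    if id_to_word.get(i, '<unk>') not in specials)

print(encode("Calcule 1 + 2"))  # → [1, 4, 7, 5] ('calcule' becomes <unk>)
print(decode([4, 7, 5, 2]))     # → "1 + 2" (<eos> is dropped)
```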

📑 Inference

To try LOO-ARICATE, you can use the inference code below:

import torch
import torch.nn as nn
import torch.nn.functional as F
import json
# Imports needed to load the model from the Hub
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file as load_safetensors_file

# --- A. AricateAttentionLayer ---
class AricateAttentionLayer(nn.Module):
    """Additive (Bahdanau) attention layer."""
    def __init__(self, hidden_dim):
        super(AricateAttentionLayer, self).__init__()
        self.W = nn.Linear(hidden_dim, hidden_dim)
        self.U = nn.Linear(hidden_dim, hidden_dim)
        self.V = nn.Linear(hidden_dim, 1, bias=False)
    def forward(self, rnn_outputs, last_hidden):
        last_hidden_expanded = last_hidden.unsqueeze(1)
        energy = torch.tanh(self.W(rnn_outputs) + self.U(last_hidden_expanded))
        attention_weights_raw = self.V(energy).squeeze(2)
        attention_weights = F.softmax(attention_weights_raw, dim=1)
        context_vector = torch.sum(rnn_outputs * attention_weights.unsqueeze(2), dim=1)
        return context_vector

# --- B. AricateModel ---
class AricateModel(nn.Module):
    """Aricate V4 architecture, adapted for reloading."""
    def __init__(self, vocab_size: int, embedding_dim: int, hidden_dim: int, num_layers: int = 1, config: dict = None):
        super(AricateModel, self).__init__()

        if config is not None:
             vocab_size = config.get("vocab_size", vocab_size)
             embedding_dim = config.get("embedding_dim", embedding_dim)
             hidden_dim = config.get("hidden_dim", hidden_dim)
             num_layers = config.get("num_layers", num_layers)

        self.vocab_size = vocab_size
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers

        self.word_embeddings = nn.Embedding(num_embeddings=vocab_size, embedding_dim=embedding_dim, padding_idx=0)
        self.rnn = nn.GRU(input_size=embedding_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=True)
        self.attention = AricateAttentionLayer(hidden_dim)
        self.hidden_to_vocab = nn.Linear(hidden_dim * 2, vocab_size)

    def forward(self, input_words):
        embeds = self.word_embeddings(input_words)
        rnn_out, hn = self.rnn(embeds)
        last_hidden = hn[-1]
        context_vector = self.attention(rnn_out, last_hidden)
        combined_features = torch.cat((context_vector, last_hidden), dim=1)
        logits = self.hidden_to_vocab(combined_features)
        return logits

# --- C. WordTokenizer ---
class WordTokenizer:
    """Aricate tokenizer, adapted to reload from the published vocabulary."""
    def __init__(self, word_to_id: dict):
        self.word_to_id = word_to_id
        self.id_to_word = {id: word for word, id in word_to_id.items()}
        self.vocab_size = len(word_to_id)
        self.special_tokens = {
            '<pad>': word_to_id['<pad>'],
            '<unk>': word_to_id['<unk>'],
            '<eos>': word_to_id['<eos>'],
            '<sep>': word_to_id['<sep>'],
        }

    def encode(self, text, add_eos=False):
        words = text.lower().split()
        if add_eos:
            words.append('<eos>')
        ids = [self.word_to_id.get(word, self.word_to_id['<unk>']) for word in words]
        return ids

    def decode(self, ids):
        words = [self.id_to_word.get(id, '<unk>') for id in ids]
        return " ".join(word for word in words if word not in ['<pad>', '<unk>', '<eos>', '<sep>'])

# --- D. Generation function (Top-K sampling and temperature) ---
def generate_sequence(model, tokenizer, question, max_length, max_len_input, temperature=1.0, top_k=None):
    """
    Generates the answer using Top-K sampling and temperature.

    Args:
        temperature (float): Adjusts creativity (T > 1.0) or caution (T < 1.0).
        top_k (int/None): Restricts sampling to the K most likely tokens.
    """
    model.eval()
    
    sep_id = tokenizer.special_tokens['<sep>']
    eos_id = tokenizer.special_tokens['<eos>']

    question_ids = tokenizer.encode(question)
    current_sequence = question_ids + [sep_id]
    
    print(f"\n--- Q/A Génération (Sampling | T={temperature:.2f} | K={top_k if top_k else 'désactivé'}) ---")
    print(f"Question: '{question}'")

    with torch.no_grad():
        for _ in range(max_length):
            
            # Prepare the input (truncate to the last max_len_input tokens, then left-pad)
            input_ids_to_pad = current_sequence[-max_len_input:] if len(current_sequence) > max_len_input else current_sequence
            padding_needed = max_len_input - len(input_ids_to_pad)
            input_ids_padded = [tokenizer.special_tokens['<pad>']] * padding_needed + input_ids_to_pad
            input_tensor = torch.tensor(input_ids_padded).unsqueeze(0)

            # 1. Get the logits
            logits = model(input_tensor).squeeze(0)

            # 2. Apply the temperature
            if temperature != 1.0 and temperature > 0:
                logits = logits / temperature

            # 3. Apply Top-K
            if top_k is not None:
                # Keep only the top_k highest logits
                values, indices = torch.topk(logits, k=top_k)

                # Mask out everything else with -inf
                mask = torch.ones_like(logits) * float('-inf')

                # Scatter the kept values back into the mask
                logits = torch.scatter(mask, dim=0, index=indices, src=values)

            # 4. Convert to probabilities and sample
            probabilities = F.softmax(logits, dim=-1)

            # Defensive renormalization (softmax output already sums to 1)
            if top_k is not None:
                probabilities = probabilities.div(probabilities.sum())
            
            predicted_id = torch.multinomial(probabilities, num_samples=1).item()

            # 5. Append the prediction to the sequence
            current_sequence.append(predicted_id)

            if predicted_id == eos_id:
                break

    # 6. Decode
    try:
        sep_index = current_sequence.index(sep_id)
        response_ids = [id for id in current_sequence[sep_index+1:] if id != eos_id]
    except ValueError:
        response_ids = current_sequence

    final_response = tokenizer.decode(response_ids)
    
    # Unlike beam search, sampling yields no single log-probability score to report.
    print(f"Generated answer: '{final_response}'")
    print("-" * 40)
    
    return final_response

# --- E. Lam-2 model loading function ---
def load_lam2_model(repo_id: str):
    """
    Downloads and loads the Lam-2 model and its tokenizer from Hugging Face.
    """
    print(f"--- Loading Lam-2 from {repo_id} ---")

    # 1. Download the tokenizer
    tokenizer_path = hf_hub_download(repo_id=repo_id, filename="aricate_tokenizer.txt")
    with open(tokenizer_path, 'r', encoding='utf-8') as f:
        word_to_id = json.load(f)
    tokenizer = WordTokenizer(word_to_id)
    print(f"Tokenizer chargé. Taille du vocabulaire: {tokenizer.vocab_size}")

    # 2. Download the configuration
    config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
    with open(config_path, 'r') as f:
        model_config = json.load(f)
    print("Configuration du modèle chargée.")

    # 3. Initialize the model
    model = AricateModel(
        vocab_size=model_config['vocab_size'],
        embedding_dim=model_config['embedding_dim'],
        hidden_dim=model_config['hidden_dim'],
        config=model_config
    )

    # 4. Download and load the Safetensors weights
    weights_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors")
    state_dict = load_safetensors_file(weights_path)

    model.load_state_dict(state_dict)
    print("Poids du modèle Safetensors chargés avec succès.")

    MAX_LEN_INPUT_DEFAULT = 30

    print("-" * 40)
    return model, tokenizer, MAX_LEN_INPUT_DEFAULT

# --- F. Main execution block ---
if __name__ == '__main__':

    LAM2_REPO_ID = "Clemylia/Loo-Aricate"
    MAX_GENERATION_LENGTH = 15
    
    # 🚨 Sampling parameters for the test 🚨
    TEST_TEMPERATURE = 0.6  # < 1.0 for more conservative output; > 1.0 for more randomness
    TEST_TOP_K = 10         # Restrict sampling to the 10 most likely tokens

    # Test prompts are kept in French, since those are the only words the model has seen.
    test_questions = [
        "Envoie moi une suite de 2 chiffres",
        "Suite de nombres avec des centaines",
        "Calcul de base",
        "Écris 6 chiffres, n'importe lesquels",
        "Suite de nombres avec des centaines",
    ]

    try:
        # 1. Load the model
        lam2_model, lam2_tokenizer, max_len_input = load_lam2_model(LAM2_REPO_ID)

        print(f"\n>>> TEST D'INFÉRENCE LAM-2 EN MODE CRÉATIF (T={TEST_TEMPERATURE}, K={TEST_TOP_K}) <<<")

        # 2. Inference
        for question in test_questions:
            generate_sequence(
                model=lam2_model,
                tokenizer=lam2_tokenizer,
                question=question,
                max_length=MAX_GENERATION_LENGTH,
                max_len_input=max_len_input,
                temperature=TEST_TEMPERATURE,
                top_k=TEST_TOP_K
            )

    except Exception as e:
        print(f"\n❌ Une erreur est survenue lors du chargement ou de l'inférence.")
        print(f"Détail de l'erreur: {e}")
        print("Vérifiez l'installation des dépendances et le REPO_ID.")

🛑 Limitations 🛑

If LOO-Aricate does not follow your request exactly (for example, does not give you exactly 2 digits when asked), that is perfectly normal. Loo-Aricate is a reduced-size Aricate SLM: its vocabulary holds 298 digit tokens, versus 5,000 to 18,000 words for the average Aricate SLM.

It is extremely small and compact, and was built solely for generic generation of lists of numbers and symbols.

🍪 License ✨

It is not fully open source. 🛑 Please read our license file.


Loo-Aricate on huggingface.co

Model URL: https://huggingface.co/Clemylia/Loo-Aricate
License details: https://choosealicense.com/licenses/other
Provider: Clemylia
ORGANIZATIONS
