**NeuroBERT-Tiny** is a super-lightweight NLP model derived from **google/bert-base-uncased**, optimized for real-time inference on edge and IoT devices. With a quantized size of ~15MB and ~4M parameters, it delivers efficient contextual language understanding for resource-constrained environments like mobile apps, wearables, microcontrollers, and smart home devices. Designed for low latency and offline operation, it is well suited to privacy-first applications with limited connectivity.
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Load the pre-trained model and tokenizer from the Hugging Face Hub
model = AutoModelForMaskedLM.from_pretrained("boltuix/NeuroBERT-Tiny")
tokenizer = AutoTokenizer.from_pretrained("boltuix/NeuroBERT-Tiny")
```
**Manual Download**:

- Download the quantized model weights from the Hugging Face model hub.
- Extract and integrate them into your edge/IoT application, or fetch them programmatically as sketched below.
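As a minimal sketch, the weights can also be pulled with the `huggingface_hub` client; the `local_dir` path here is illustrative:

```python
from huggingface_hub import snapshot_download

# Download all model files (config, weights, tokenizer) into a local
# folder that can be shipped to the target device for offline use.
model_dir = snapshot_download(
    repo_id="boltuix/NeuroBERT-Tiny",
    local_dir="./neurobert-tiny",  # example path
)
print(f"Model files saved to: {model_dir}")
```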
## Quickstart: Masked Language Modeling

Predict missing words in IoT-related sentences with masked language modeling:

```python
from transformers import pipeline

# Load the fill-mask pipeline
mlm_pipeline = pipeline("fill-mask", model="boltuix/NeuroBERT-Tiny")

# Predict the masked token
result = mlm_pipeline("Please [MASK] the door before leaving.")
print(result[0]["sequence"])  # Output: "Please open the door before leaving."
```
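The pipeline returns a ranked list of candidate fills. To inspect the top predictions and their scores, a short usage sketch using the standard fill-mask output fields:

```python
# Each entry carries the candidate token and its confidence score
for pred in mlm_pipeline("Please [MASK] the door before leaving."):
    print(f"{pred['token_str']:>12} : {pred['score']:.4f}")
```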
## Quickstart: Text Classification

Perform intent detection or text classification for IoT commands:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# 🧠 Load tokenizer and classification model
model_name = "boltuix/NeuroBERT-Tiny"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# 🧪 Example input
text = "Turn off the fan"

# ✂️ Tokenize the input
inputs = tokenizer(text, return_tensors="pt")

# 🔍 Get prediction
with torch.no_grad():
    outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=1)
    pred = torch.argmax(probs, dim=1).item()

# 🏷️ Define labels
labels = ["OFF", "ON"]

# ✅ Print result
print(f"Text: {text}")
print(f"Predicted intent: {labels[pred]} (Confidence: {probs[0][pred]:.4f})")
```
```
Text: Turn off the fan
Predicted intent: OFF (Confidence: 0.5328)
```
**Note**: Fine-tune the model for specific classification tasks to improve accuracy (see the Fine-Tuning Guide below).
## Evaluation

NeuroBERT-Tiny was evaluated on a masked language modeling task using 10 IoT-related sentences. The model predicts the top-5 tokens for each masked word, and a test passes if the expected word is among those predictions.
### Test Sentences

| Sentence | Expected Word |
|---|---|
| She is a [MASK] at the local hospital. | nurse |
| Please [MASK] the door before leaving. | shut |
| The drone collects data using onboard [MASK]. | sensors |
| The fan will turn [MASK] when the room is empty. | off |
| Turn [MASK] the coffee machine at 7 AM. | on |
| The hallway light switches on during the [MASK]. | night |
| The air purifier turns on due to poor [MASK] quality. | air |
| The AC will not run if the door is [MASK]. | open |
| Turn off the lights after [MASK] minutes. | five |
| The music pauses when someone [MASK] the room. | enters |
### Evaluation Code

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

# 🧠 Load model and tokenizer
model_name = "boltuix/NeuroBERT-Tiny"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# 🧪 Test data
tests = [
    ("She is a [MASK] at the local hospital.", "nurse"),
    ("Please [MASK] the door before leaving.", "shut"),
    ("The drone collects data using onboard [MASK].", "sensors"),
    ("The fan will turn [MASK] when the room is empty.", "off"),
    ("Turn [MASK] the coffee machine at 7 AM.", "on"),
    ("The hallway light switches on during the [MASK].", "night"),
    ("The air purifier turns on due to poor [MASK] quality.", "air"),
    ("The AC will not run if the door is [MASK].", "open"),
    ("Turn off the lights after [MASK] minutes.", "five"),
    ("The music pauses when someone [MASK] the room.", "enters")
]

results = []

# 🔁 Run tests
for text, answer in tests:
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits[0, mask_pos, :]
    topk = logits.topk(5, dim=1)
    top_ids = topk.indices[0]
    top_scores = torch.softmax(topk.values, dim=1)[0]
    guesses = [(tokenizer.decode([i]).strip().lower(), float(score)) for i, score in zip(top_ids, top_scores)]
    results.append({
        "sentence": text,
        "expected": answer,
        "predictions": guesses,
        "pass": answer.lower() in [g[0] for g in guesses]
    })

# 🖨️ Print results
for r in results:
    status = "✅ PASS" if r["pass"] else "❌ FAIL"
    print(f"\n🔍 {r['sentence']}")
    print(f"🎯 Expected: {r['expected']}")
    print("🔝 Top-5 Predictions (word : confidence):")
    for word, score in r["predictions"]:
        print(f"  - {word:12} | {score:.4f}")
    print(status)

# 📊 Summary
pass_count = sum(r["pass"] for r in results)
print(f"\n🎯 Total Passed: {pass_count}/{len(tests)}")
```
### Sample Results (Hypothetical)

- **Sentence**: She is a [MASK] at the local hospital.
  - **Expected**: nurse
  - **Top-5**: [doctor (0.35), nurse (0.30), surgeon (0.20), technician (0.10), assistant (0.05)]
  - **Result**: ✅ PASS
- **Sentence**: Turn off the lights after [MASK] minutes.
  - **Expected**: five
  - **Top-5**: [ten (0.40), two (0.25), three (0.20), fifteen (0.10), twenty (0.05)]
  - **Result**: ❌ FAIL

**Total Passed**: ~8/10 (depends on fine-tuning).
The model excels in IoT contexts (e.g., “sensors,” “off,” “open”) but may require fine-tuning for numerical terms like “five.”
## Evaluation Metrics

| Metric | Value (Approx.) |
|---|---|
| ✅ Accuracy | ~90–95% of BERT-base |
| 🎯 F1 Score | Balanced for MLM/NER tasks |
| ⚡ Latency | <50ms on Raspberry Pi |
| 📏 Recall | Competitive for lightweight models |
**Note**: Metrics vary with hardware (e.g., Raspberry Pi 4, Android devices) and fine-tuning. Test on your target device for accurate results; a minimal latency check is sketched below.
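A simple sketch for measuring average inference latency on your own hardware (the warm-up and iteration counts are arbitrary choices, not a prescribed benchmark):

```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "boltuix/NeuroBERT-Tiny"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("Please [MASK] the door before leaving.", return_tensors="pt")

# Warm up so one-time setup costs don't skew the numbers
with torch.no_grad():
    for _ in range(5):
        model(**inputs)

# Time repeated forward passes and report the average
runs = 50
start = time.perf_counter()
with torch.no_grad():
    for _ in range(runs):
        model(**inputs)
elapsed_ms = (time.perf_counter() - start) * 1000 / runs
print(f"Average latency: {elapsed_ms:.1f} ms per inference")
```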
## Use Cases

NeuroBERT-Tiny is designed for edge and IoT scenarios with limited compute and connectivity. Key applications include:

- **Smart Home Devices**: Parse commands like "Turn [MASK] the coffee machine" (predicts "on") or "The fan will turn [MASK]" (predicts "off").
- **IoT Sensors**: Interpret sensor contexts, e.g., "The drone collects data using onboard [MASK]" (predicts "sensors").
- **Wearables**: Real-time intent detection, e.g., "The music pauses when someone [MASK] the room" (predicts "enters").
- **Mobile Apps**: Offline chatbots or semantic search, e.g., "She is a [MASK] at the hospital" (predicts "nurse").
- **Voice Assistants**: Local command parsing, e.g., "Please [MASK] the door" (predicts "shut").
- **Toy Robotics**: Lightweight command understanding for interactive toys.
- **Fitness Trackers**: Local text feedback processing, e.g., sentiment analysis.
- **Car Assistants**: Offline command disambiguation without cloud APIs.
## Hardware Requirements

- **Processors**: CPUs, mobile NPUs, or microcontrollers (e.g., ESP32, Raspberry Pi)
- **Storage**: ~15MB for model weights (quantized for a reduced footprint)
- **Memory**: ~50MB RAM for inference
- **Environment**: Offline or low-connectivity settings

Quantization keeps memory usage minimal, making the model practical for microcontrollers; a quantization sketch follows below.
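If you need to quantize the model yourself, PyTorch's post-training dynamic quantization is one option. This is a sketch, not necessarily the exact recipe used for the published ~15MB weights:

```python
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("boltuix/NeuroBERT-Tiny")
model.eval()

# Convert Linear layer weights to int8; activations stay in float
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Save the quantized weights (state_dict only; path is illustrative)
torch.save(quantized.state_dict(), "neurobert_tiny_int8.pt")
```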
## Trained On

- **Custom IoT Dataset**: Curated data focused on IoT terminology, smart home commands, and sensor-related contexts (sourced from chatgpt-datasets). This enhances performance on tasks like command parsing and device control.

Fine-tuning on domain-specific data is recommended for optimal results.
## Fine-Tuning Guide

To adapt NeuroBERT-Tiny for custom IoT tasks (e.g., specific smart home commands):

1. **Prepare Dataset**: Collect labeled data (e.g., commands with intents or masked sentences).
2. **Fine-Tune with Hugging Face**:
```python
#!pip uninstall -y transformers torch datasets
#!pip install transformers==4.44.2 torch==2.4.1 datasets==3.0.1

import torch
from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
from datasets import Dataset
import pandas as pd

# 1. Prepare the sample IoT dataset
data = {
    "text": [
        "Turn on the fan",
        "Switch off the light",
        "Invalid command",
        "Activate the air conditioner",
        "Turn off the heater",
        "Gibberish input"
    ],
    "label": [1, 1, 0, 1, 1, 0]  # 1 for valid IoT commands, 0 for invalid
}
df = pd.DataFrame(data)
dataset = Dataset.from_pandas(df)

# 2. Load tokenizer and model
model_name = "boltuix/NeuroBERT-Tiny"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

# 3. Tokenize the dataset
def tokenize_function(examples):
    # Short max_length suits brief IoT commands
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=64)

tokenized_dataset = dataset.map(tokenize_function, batched=True)

# 4. Set format for PyTorch
tokenized_dataset.set_format("torch", columns=["input_ids", "attention_mask", "label"])

# 5. Define training arguments
training_args = TrainingArguments(
    output_dir="./iot_neurobert_results",
    num_train_epochs=5,  # more epochs for the small dataset
    per_device_train_batch_size=2,
    logging_dir="./iot_neurobert_logs",
    logging_steps=10,
    save_steps=100,
    evaluation_strategy="no",
    learning_rate=3e-5,  # adjusted for NeuroBERT-Tiny
)

# 6. Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
)

# 7. Fine-tune the model
trainer.train()

# 8. Save the fine-tuned model
model.save_pretrained("./fine_tuned_neurobert_iot")
tokenizer.save_pretrained("./fine_tuned_neurobert_iot")

# 9. Example inference
text = "Turn on the light"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=64)
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()
print(f"Predicted class for '{text}': {'Valid IoT Command' if predicted_class == 1 else 'Invalid Command'}")
```
3. **Deploy**: Export the fine-tuned model to ONNX or TensorFlow Lite for edge devices, as in the sketch below.
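A minimal ONNX export sketch using `torch.onnx.export`; the input names, opset version, and paths are illustrative choices, not a prescribed configuration:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Path matches the fine-tuning example above
model_dir = "./fine_tuned_neurobert_iot"
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForSequenceClassification.from_pretrained(model_dir)
model.config.return_dict = False  # export plain tuple outputs for tracing
model.eval()

# Trace the model with a representative input
sample = tokenizer(
    "Turn on the light", return_tensors="pt",
    padding="max_length", truncation=True, max_length=64,
)

torch.onnx.export(
    model,
    (sample["input_ids"], sample["attention_mask"]),
    "neurobert_tiny_iot.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch"},
        "attention_mask": {0: "batch"},
        "logits": {0: "batch"},
    },
    opset_version=14,
)
```

The Hugging Face Optimum library also provides higher-level ONNX export utilities, which may be worth considering for production pipelines.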
## Comparison to Other Models

| Model | Parameters | Size | Edge/IoT Focus | Tasks Supported |
|---|---|---|---|---|
| NeuroBERT-Tiny | ~4M | ~15MB | High | MLM, NER, Classification |
| DistilBERT | ~66M | ~200MB | Moderate | MLM, NER, Classification |
| TinyBERT | ~14M | ~50MB | Moderate | MLM, Classification |
NeuroBERT-Tiny’s IoT-optimized training and quantization make it more suitable for microcontrollers than larger models like DistilBERT.