Trained on my json-training dataset, these are finetunes of the smallest state-of-the-art LLMs to output structured JSON.
Where their base/instruct versions have so little clue how to output JSON that forcing it using
techniques like grammars simply hangs forever, these little guys (mostly) work like a charm.
(SmolLM 135M still sometimes babbles on. Set a maximum token limit.)
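Since the smallest model can babble past the closing brace, a hard token limit plus a cheap post-hoc guard helps: truncate the output at the first complete JSON value. A minimal, framework-independent sketch (not part of the released models, just stdlib Python):

```python
import json

def first_json_object(text: str) -> dict:
    """Extract the first complete JSON object from model output,
    discarding any trailing babble after the matching closing brace."""
    decoder = json.JSONDecoder()
    start = text.index("{")  # locate where the object begins
    # raw_decode stops at the end of the first valid JSON value,
    # so trailing garbage is simply ignored
    obj, _end = decoder.raw_decode(text, start)
    return obj
```

For example, `first_json_object('{"a": 1} and then some babble')` returns `{"a": 1}`.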
Training was done with Unsloth at 4-bit (lmao), LoRA rank=8, alpha=8, for 3 epochs each.
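For anyone reproducing the setup: Unsloth's LoRA path builds on PEFT, so the hyperparameters above roughly correspond to an adapter config like the one below. This is a sketch only; `target_modules` and the other defaults are assumptions, not taken from the actual training run.

```python
from peft import LoraConfig

# LoRA adapter matching the stated hyperparameters: rank 8, alpha 8.
# target_modules is an assumption -- the card does not say which
# projections were adapted.
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Per the card, the base model would then be loaded in 4-bit (e.g. via bitsandbytes) and trained for 3 epochs.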
rev1 models were trained on the first revision (11.6k rows) of json-training, while rev2 models were trained on the second (20.6k rows).