Stable Audio Open Small
generates variable-length (up to 11s) stereo audio at 44.1kHz from text prompts. It comprises three components: an autoencoder that compresses waveforms into a manageable sequence length, a T5-based text encoder for text conditioning, and a diffusion transformer (DiT) that operates in the latent space of the autoencoder.
To optimize this model for deployment on Arm CPUs, follow the step-by-step guide in the Arm Learning Path.
Training dataset
Datasets Used
Our dataset consists of 486492 audio recordings, of which 472618 are from Freesound and 13874 are from the Free Music Archive (FMA). All audio files are licensed under CC0, CC BY, or CC Sampling+. Both the Freesound and FMA datasets were used to train the autoencoder; the DiT was trained solely on the Freesound dataset. We use a publicly available pre-trained T5 model (t5-base) for text conditioning.
Attribution
Attribution for all audio recordings used to train Stable Audio Open Small can be found on our attribution page.
Mitigations
We conducted an in-depth analysis to ensure no unauthorized copyrighted music was present in our training data before we began training.
To that end, we first identified music samples in Freesound using the PANNs music classifier, which is based on AudioSet classes. We flagged samples containing at least 30 seconds of audio predicted to belong to a music-related class with probability above 0.15 (PANNs output probabilities range from 0 to 1). This threshold was determined by classifying known music examples from FMA and ensuring no false negatives were present.
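The filtering rule described above can be sketched as follows. This is a minimal illustration over synthetic per-second class probabilities; the class names, data, and helper function are hypothetical, not the actual pipeline code.

```python
# Sketch of the music-filtering rule: flag a recording as music if at
# least 30 one-second windows score above the 0.15 threshold on a
# music-related class. Class names and inputs are illustrative only.
MUSIC_THRESHOLD = 0.15
MIN_MUSIC_SECONDS = 30

def is_music(per_second_probs):
    """per_second_probs: list of dicts mapping AudioSet-style class
    names to probabilities in [0, 1], one dict per second of audio."""
    music_seconds = sum(
        1 for probs in per_second_probs
        if any(p > MUSIC_THRESHOLD
               for cls, p in probs.items()
               if cls.startswith("Music"))
    )
    return music_seconds >= MIN_MUSIC_SECONDS

# 40 seconds scoring 0.2 on "Music": above threshold, long enough
clip = [{"Music": 0.2, "Speech": 0.01}] * 40
print(is_music(clip))        # True

# 20 seconds of confident music still falls below the 30-second minimum
short_clip = [{"Music": 0.9}] * 20
print(is_music(short_clip))  # False
```

Note that the deliberately low 0.15 threshold trades precision for recall: false positives are acceptable here because flagged samples go on to a second, stricter check, while false negatives would let copyrighted music slip through.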
The identified music samples were then sent to Audible Magic, a trusted content-detection company, to ensure the absence of copyrighted music. Audible Magic flagged suspected copyrighted music, which we removed before training. The majority of the removed content was field recordings in which copyrighted music was playing in the background. Following this procedure, we were left with 266324 CC0, 194840 CC-BY, and 11454 CC Sampling+ audio recordings.
We also conducted an in-depth analysis to ensure no copyrighted content was present in the FMA subset. Here the procedure was slightly different because the FMA subset consists of music signals: we ran a metadata search against a large database of copyrighted music (https://www.kaggle.com/datasets/maharshipandya/-spotify-tracks-dataset) and flagged any potential match. Each flagged track was then reviewed manually. After this process, we ended up with 8967 CC-BY and 4907 CC0 tracks.
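The metadata-matching step can be sketched as a normalized (artist, title) lookup against the reference catalogue. The normalization scheme and field names below are illustrative assumptions, not the actual pipeline:

```python
import re

def normalize(s):
    """Lowercase and collapse punctuation/whitespace so near-identical
    metadata strings (e.g. trailing '!' or case differences) compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", s.lower()).strip()

def flag_matches(fma_tracks, reference_catalogue):
    """Return FMA tracks whose normalized (artist, title) pair appears
    in the reference catalogue; flagged tracks then go to manual review."""
    reference = {
        (normalize(t["artist"]), normalize(t["title"]))
        for t in reference_catalogue
    }
    return [
        t for t in fma_tracks
        if (normalize(t["artist"]), normalize(t["title"])) in reference
    ]

fma = [
    {"artist": "Some Band", "title": "Open Song"},
    {"artist": "Famous Artist", "title": "Hit Single"},
]
catalogue = [{"artist": "famous artist", "title": "Hit Single!"}]
print(flag_matches(fma, catalogue))  # flags only "Hit Single" for review
```

A fuzzy match like this deliberately over-flags; the human review pass described above is what keeps false positives from removing legitimately licensed tracks.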
Use and Limitations
Intended Use
The primary use of Stable Audio Open Small is research and experimentation on AI-based music and audio generation, including:
Research efforts to better understand the limitations of generative models and further improve the state of science.
Generation of music and audio guided by text to explore current abilities of generative AI models by machine learning practitioners and artists.
Out-of-Scope Use Cases
The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate audio or music pieces that create hostile or alienating environments for people.
Limitations
The model is not able to generate realistic vocals.
The model has been trained with English descriptions and will not perform as well in other languages.
The model does not perform equally well for all music styles and cultures.
The model is better at generating sound effects and field recordings than music.
It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
Biases
The training data may lack diversity, and not all cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of existing music genres and sound effects, and its generated samples will reflect the biases of the training data.