A two-person startup called Nari Labs has released Dia, a 1.6-billion-parameter text-to-speech (TTS) model designed to produce naturalistic dialogue directly from text prompts, and one of its creators claims it surpasses the performance of competing proprietary offerings from the likes of ElevenLabs and Google's hit NotebookLM AI podcast generation product.
It may also threaten uptake of OpenAI's recent gpt-4o-mini-tts.
"Dia rivals NotebookLM's podcast feature while surpassing ElevenLabs Studio and Sesame's open model in quality," said Toby Kim, one of the co-creators of Nari and Dia, in a post from his account on the social network X.
In a separate post, Kim noted that the model was built with "zero funding," adding in a thread: "…we weren't AI experts from the beginning. It started when we fell in love with NotebookLM's podcast feature when it launched last year. We wanted more: more control over the voices, more freedom in the script. We tried every TTS API on the market. None of them sounded like real human conversation."
Kim also credited Google for giving him and his collaborator access to the company's Tensor Processing Unit (TPU) chips for training Dia through Google's Research Cloud.
Dia's code and weights (the model's internal parameter set) are now available for download and local deployment by anyone from Hugging Face or GitHub. Individual users can try generating speech with it on a Hugging Face Space.
Advanced controls and more customizable features
Dia supports nuanced features like emotional tone, speaker tagging, and nonverbal audio cues, all from plain text.
Users can mark speaker turns with tags like [S1] and [S2], and include cues like (laughs), (coughs), or (clears throat) to enrich the resulting dialogue with nonverbal behaviors.
These tags are correctly interpreted by Dia during generation, something not reliably supported by other available models, according to the company's examples page.
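To make the script format concrete, here is a minimal sketch of a parser for the tagged-dialogue convention described above. This helper is purely illustrative and is not part of Nari Labs' codebase; it simply shows how speaker tags and parenthesized nonverbal cues can be pulled out of a plain-text prompt:

```python
import re

def parse_dia_script(script: str):
    """Split a Dia-style script into speaker turns, collecting
    nonverbal cues such as (laughs) or (coughs) for each turn."""
    turns = []
    # Split on speaker tags like [S1] / [S2], keeping the tags.
    parts = re.split(r"(\[S\d+\])", script)
    speaker = None
    for part in parts:
        part = part.strip()
        if not part:
            continue
        if re.fullmatch(r"\[S\d+\]", part):
            speaker = part  # a new speaker turn begins
        elif speaker is not None:
            cues = re.findall(r"\(([^)]+)\)", part)
            turns.append({"speaker": speaker, "text": part, "cues": cues})
    return turns

script = "[S1] Did you hear the news? (laughs) [S2] No, tell me! (coughs)"
for turn in parse_dia_script(script):
    print(turn["speaker"], turn["cues"])
```

Running this prints each speaker tag alongside the cues Dia would render as actual nonverbal audio rather than spoken text.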
The model is currently English-only and not tied to any single speaker's voice; it produces different voices on each run unless users fix the generation seed or provide an audio prompt. Audio conditioning, or voice cloning, lets users guide speech tone and voice likeness by uploading a sample clip.
Nari Labs offers example code to facilitate this process and a Gradio-based demo so users can try it without any setup.
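Based on the usage pattern shown in the project's GitHub README, a generation run might look roughly like the sketch below. The actual model call is shown only as a comment, since it requires a GPU and downloaded weights, and the exact names (`Dia.from_pretrained`, `generate`, the `audio_prompt_path` argument) may differ between releases; only the script-building helper runs as-is:

```python
def build_script(turns):
    """Join (speaker_tag, line) pairs into a single Dia-style prompt string."""
    return " ".join(f"{tag} {line}" for tag, line in turns)

script = build_script([
    ("[S1]", "Welcome back to the show. (laughs)"),
    ("[S2]", "Great to be here!"),
])
print(script)

# Hypothetical generation call, following the pattern in the repository README:
# from dia.model import Dia
# model = Dia.from_pretrained("nari-labs/Dia-1.6B")
# audio = model.generate(script, audio_prompt_path="sample_voice.mp3")  # optional cloning
# The resulting array can then be written out with an audio library such as soundfile.
```

Supplying the optional audio prompt is what carries a sample clip's vocal characteristics through the generated dialogue.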
Comparison with ElevenLabs and Sesame
Nari offers a host of example audio files generated by Dia on its Notion site, comparing it to other leading text-to-speech rivals, specifically ElevenLabs Studio and Sesame CSM-1B, the latter a new text-to-speech model from Oculus VR headset co-creator Brendan Iribe that went somewhat viral on X earlier this year.
Side-by-side examples shared by Nari Labs show how Dia outperforms the competition in several areas:
In standard dialogue scenarios, Dia handles both natural timing and nonverbal expressions better. For example, in a script ending with (laughs), Dia interprets and delivers actual laughter, whereas ElevenLabs and Sesame output textual substitutions like "haha."
(Nari's site includes audio examples comparing Dia's rendition with the same sentence spoken by ElevenLabs Studio.)
In multi-turn conversations with emotional range, Dia demonstrates smoother transitions and tone shifts. One test included a dramatic, emotionally charged emergency scene. Dia rendered the urgency and speaker stress effectively, while competing models often flattened delivery or lost pacing.
Dia uniquely handles nonverbal-only scripts, such as a humorous exchange involving coughs, sniffs, and laughs. Competing models failed to recognize these tags or skipped them entirely.
Even with rhythmically complex content like rap lyrics, Dia generates fluid, performance-style speech that maintains tempo. This contrasts with the more monotone or disjointed outputs from ElevenLabs and Sesame's 1B model.
Using audio prompts, Dia can extend or continue a speaker's voice style into new lines. An example using a conversational clip as a seed showed how Dia carried the vocal characteristics from the sample through the rest of the scripted dialogue. This feature isn't robustly supported in other models.
In one set of tests, Nari Labs noted that Sesame's best website demo likely used an internal 8B version of the model rather than the public 1B checkpoint, resulting in a gap between advertised and actual performance.
Model access and technical specifications
Developers can access Dia from Nari Labs' GitHub repository and its Hugging Face model page.
The model runs on PyTorch 2.0+ and CUDA 12.6 and requires about 10GB of VRAM.
Inference on enterprise-grade GPUs like the NVIDIA A4000 delivers roughly 40 tokens per second.
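At that throughput, estimating wall-clock generation time is simple division. The token count below is a made-up figure purely for illustration, since the article does not state how many tokens correspond to a second of audio:

```python
def generation_seconds(num_tokens: int, tokens_per_second: float = 40.0) -> float:
    """Estimated wall-clock time to generate num_tokens at a given throughput."""
    return num_tokens / tokens_per_second

# A hypothetical 2,400-token clip at ~40 tokens/s on an A4000:
print(generation_seconds(2400))  # 60.0
```

Any quantized or CPU release would change the tokens-per-second figure, and with it this estimate.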
While the current version only runs on GPU, Nari plans to offer CPU support and a quantized release to improve accessibility.
The startup offers both a Python library and a CLI tool to further streamline deployment.
Dia's flexibility opens up use cases ranging from content creation to assistive technologies and synthetic voiceovers.
Nari Labs is also developing a consumer version of Dia aimed at casual users looking to remix or share generated conversations. Users can sign up via email for a waitlist to get early access.
Fully open source
The model is distributed under a fully open-source Apache 2.0 license, which means it can be used for commercial purposes, something that will clearly appeal to enterprises and indie app developers.
Nari Labs explicitly prohibits usage that involves impersonating individuals, spreading misinformation, or engaging in illegal activities. The team encourages responsible experimentation and has taken a stance against unethical deployment.
Dia's development credits support from the Google TPU Research Cloud, Hugging Face's ZeroGPU grant program, and prior work on SoundStorm, Parakeet, and the Descript Audio Codec.
Nari Labs itself comprises just two engineers, one full-time and one part-time, but it actively invites community contributions through its Discord server and GitHub.
With a clear focus on expressive quality, reproducibility, and open access, Dia adds a distinctive new voice to the landscape of generative speech models.