Research Scientist - Voice AI Foundations

Company Overview

Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency, and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed, and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity

Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.

The Role

You will pioneer the development of Latent Space Models (LSMs), a new approach that aims to solve the fundamental data, scale, and cost challenges associated with building robust, contextualized voice AI.
Your research will focus on solving one or more of the following problems:

- Build next-generation neural audio codecs that achieve extreme low-bit-rate compression and high-fidelity reconstruction across a world-scale corpus of general audio.
- Pioneer steerable generative models that can synthesize the full diversity of human speech from the codec latent representation, from casual conversation to highly emotional expression to complex multi-speaker scenarios with environmental noise and overlapping speech.
- Develop embedding systems that cleanly factorize the codec latent space into interpretable dimensions of speaker, content, style, environment, and channel effects, enabling precise control over each aspect and the ability to massively amplify an existing seed dataset through “latent recombination”.
- Leverage latent recombination to generate synthetic audio data at previously impossible scales, unlocking joint model and data scaling paradigms for audio.
- Design model architectures, training schemes, and inference algorithms adapted for hardware at the bare metal, enabling cost-efficient training on billion-hour datasets and powering real-time inference for hundreds of millions of concurrent conversations.

The Challenge

We are seeking researchers who:

- See "unsolved" problems as opportunities to pioneer entirely new approaches.
- Can identify the one critical experiment that will validate or kill an idea in days, not months.
- Have the vision to scale successful proofs-of-concept 100x.
- Are obsessed with using AI to automate and amplify their own impact.

If you find yourself energized rather than daunted by these expectations – if you're already thinking about five ideas to try while reading this – you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.

It's Important to Us That You Have
- Strong mathematical foundation in statistical learning theory, particularly in areas relevant to self-supervised and multimodal learning.
- Deep expertise in foundation model architectures, with an understanding of how to scale training across multiple modalities.
- Proven ability to bridge theory and practice – someone who can both derive novel mathematical formulations and implement them efficiently.
- Demonstrated ability to build data pipelines that can process and curate massive datasets while maintaining quality and diversity.
- Track record of designing controlled experiments that isolate the impact of architectural innovations and validate theoretical insights.
- Experience optimizing models for real-world deployment, including knowledge of hardware constraints and efficiency techniques.
- History of open-source contributions or research publications that have advanced the state of the art in speech/language AI.

Summary

Overall, an ideal researcher in deep learning consistently demonstrates:

- A solid grounding in theoretical and statistical principles.
- A talent for proposing and validating new algorithmic solutions.
- The capacity to orchestrate data pipelines that scale and reflect real-world diversity.
- Awareness of hardware constraints and system-level trade-offs for efficiency.
- Thorough and transparent experimental practices.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together, and move quickly.
We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.