Distyl AI partners with blue-chip leaders to help them create the enterprises of the future. The company’s technology, talent, and research have driven outcomes for the world’s greatest companies, including leaders in telecommunications, healthcare, and financial services. Its proprietary platform, Distillery, curates context from across the enterprise and adapts to each customer’s needs, prioritizing the outcomes customers need to show impact in months, not years. Headquartered in San Francisco, Distyl has reached more than 120 million end users and delivered hundreds of millions in operating impact across industries. Distyl is backed by Lightspeed Venture Partners, Khosla Ventures, DST Global, Coatue, and Dell Technologies Capital.
Headquarters: San Francisco (also lists New York presence)
Product: Distillery — proprietary platform to generate, evaluate, and deploy AI-native workflows
Founders / Leadership: Arjun Prakash (Co‑Founder & CEO) and Derek Ho (Co‑Founder & COO)
Funding: Reported total funding ~ $202M; Series B reported $175M
Customers / Impact: Works with large enterprises across telecom, healthcare, financial services; reported >120M end users reached
Company Overview
Problem Domain
Enterprise operations and AI systems integration across industries (telecom, healthcare, manufacturing, insurance, retail, financial services).
Founded
2022
Industry
Enterprise AI / AI systems
Funding Track Record
Series B: $175M
Reported post‑money valuation: $1.8B (per coverage)
Investor Signal
“Backed by Lightspeed Venture Partners, Khosla Ventures, DST Global, Coatue, and Dell Technologies Capital”
Founders
What we do
Related Companies
Company | HQ | Industry | Total Funding
Giga | US | — | —
Writer | US | Data and Analytics, DeepTech, Information Technology, Media, Software | $326M
FurtherAI | US | Data and Analytics, DeepTech, Finance, Information Technology, Software | —
Adaptive ML | FR | Data and Analytics, DeepTech, Education, HR and Recruiting, Information Technology, Software | —
Reevo | US | Data and Analytics, Information Technology, Software | —
Join the Team
New Graduate AI Engineer
On-Site • San Francisco Bay Area, New York, US
Who you are
We’re hiring early-career / new-graduate AI Engineers who are excited to build real-world AI systems and grow quickly in a fast-paced, customer-facing environment
This role is ideal for candidates who have strong technical fundamentals, a passion for AI, and a desire to learn how cutting-edge GenAI systems are built and deployed in enterprise settings
We’re excited to meet candidates who are curious, motivated, and eager to learn
A degree in Computer Science, Engineering, AI/ML, or a related field
Proficiency in Python or TypeScript, with strong software engineering fundamentals
Academic, internship, or project experience with LLMs or GenAI systems, such as:
Prompting and prompt experimentation
RAG pipelines
Simple AI agents or tool-using systems
Familiarity with LLM tooling or frameworks (e.g., LangChain, LlamaIndex, Agents SDK, or similar), or strong interest in learning them
Experience building projects end-to-end (school projects, internships, hackathons, or open-source)
Comfort working in collaborative environments and learning from feedback
Basic familiarity with cloud platforms (AWS, GCP, or Azure), Docker, or CI/CD is a plus, but not required
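To make the kind of project experience listed above concrete, here is a toy sketch of the retrieval-and-prompt-assembly half of a RAG pipeline. It uses plain keyword overlap in place of embeddings and stops where a real LLM call would go; all function names and the sample documents are illustrative assumptions, not Distyl APIs.

```python
import re

# Hypothetical helper: rank documents by word overlap with the query.
# Real pipelines would use embeddings and a vector store instead.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:k]

# Assemble the retrieved context and the user question into one prompt.
def build_prompt(query: str, context: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Distillery curates context from across the enterprise.",
    "Invoices are processed nightly by the billing batch job.",
    "The support queue is triaged every morning.",
]
question = "How is enterprise context curated?"
prompt = build_prompt(question, retrieve(question, docs, k=1))
# In a real pipeline, `prompt` would now be sent to an LLM for generation.
```

A school or hackathon project that wires a retriever like this to an actual model call, then measures answer quality, is exactly the end-to-end shape described above.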
What the job involves
As a new grad at Distyl, you’ll be a hands-on contributor from day one, working alongside experienced engineers to design, implement, and deploy production-grade AI systems powered by Large Language Models (LLMs)
This role offers structured exposure to both technical engineering work and real-world business applications of AI
You’ll begin by learning through hands-on development and shadowing customer work, and over time grow into greater ownership of technical decisions and solution design
Design, implement, and deploy GenAI applications under the guidance of senior engineers
Contribute to prompt design, agent logic, retrieval-augmented generation (RAG), and model evaluation
Help build full-stack AI applications that deliver measurable business value
Gain exposure to customer-facing work by initially shadowing technical conversations and learning how business needs are translated into system design, with opportunities over time to take on more responsibility in technical decision-making and implementation
Partner with senior engineers to understand customer problems and translate requirements into technical solutions
Participate in customer discussions, solution design sessions, and iterative delivery
Help improve Distillery, Distyl’s internal LLM application platform, by building reusable components, tools, and workflows
Learn best practices for scalable, maintainable AI infrastructure
Write clean, well-tested, and observable code that meets reliability, performance, and security standards
Learn how production AI systems are monitored, debugged, and improved over time
Assist with evaluating AI systems across accuracy, latency, cost, and robustness
Apply feedback from users and metrics to improve system performance
Continuously develop your skills in LLMs, software engineering, and AI through mentorship, code reviews, and hands-on project work
Learn modern development workflows and deployment practices used in enterprise AI
In your first year, you’ll:
Ship production AI features used by real customers
Develop a strong foundation in LLM systems and AI engineering best practices
Grow into owning meaningful components of customer-facing AI systems
Build confidence working across the full lifecycle of AI development—from problem definition to deployment
Gain structured customer exposure—starting by shadowing technical discussions and learning how business requirements map to technical solutions, and gradually taking on more ownership in both technical execution and customer-facing conversations
Receive mentorship from experienced AI engineers through hands-on project work, code reviews, and design discussions
The application process
Due to the high volume of interest, we’re not able to review or respond to all direct emails or LinkedIn messages
We will be in touch with every applicant once we’ve completed our review, regardless of the decision