
Evaluate, tune, and serve the best LLMs for your business. If you can measure it, reinforcement learning can optimize it.

What they do: Reinforcement-learning platform (Adaptive Engine) for tuning, evaluating, A/B testing, and serving specialized language models
Customers / partners: Reported deployments with AT&T, SK Telecom, and Manulife
Funding: Series A of $20M closed Mar 11, 2024 (lead: Index Ventures)
Locations: Offices in New York and Paris; hybrid work
Team size / founding: Around 32 employees; six-person founding team
Focus: Improving production performance of generative AI/LLMs via preference tuning, RLHF, and automated experimentation
Founded: 2023
Industry: Enterprise AI / Generative AI
Funding amount: $20,000,000
Investors: Seven investors reported; Index Ventures led the round, with participation from ICONIQ Capital, Motier Ventures, Databricks Ventures, Financiere Saint James, and investors associated with Factorial
Adaptive ML is building a reinforcement learning platform to tune, evaluate, and serve specialized language models. We are pioneering the development of task-specific LLMs using synthetic data, creating the foundational tools and products needed for models to self-critique and self-improve based on simple guidelines.
Our Technical Staff develops the foundational technology that powers Adaptive ML, guided by requirements from our Commercial and Product teams. We are committed to building robust, efficient technology and conducting impactful, at-scale research to drive our roadmap and deliver value to our customers.
Job Title: GPU Performance Engineer
Employment Type: Full-time
Department: Technical Staff
Location: Paris or New York (in-person role)
As a GPU Performance Engineer, you will help ensure that our LLM stack (Adaptive Harmony) delivers state-of-the-art performance across a variety of settings, from latency-bound serving to throughput-bound training and offline inference. You will contribute to the foundational technology powering Adaptive ML by delivering performance improvements directly to our clients and internal workloads.