
Together AI is a research-driven artificial intelligence company. We contribute leading open-source research, models, and datasets to advance the frontier of AI. Our decentralized cloud services empower developers and researchers at organizations of all sizes to train, fine-tune, and deploy generative AI models. We believe open and transparent AI systems will drive innovation and create the best outcomes for society.

Founded: June 2022
Headquarters: San Francisco, CA
Core product: AI-native cloud for training, fine-tuning, and inference on optimized GPU clusters
Total disclosed funding: USD 533,500,000
Employee count: 329
| Company | Together AI |
|---|---|
| Description | AI infrastructure and platform for training, fine-tuning, and inference of large generative models |
| Founded | 2022 |
| Industry | Software Development |
| Disclosed funding rounds | USD 20,000,000; USD 102,500,000; USD 305,000,000 (valuation reported at $3.3 billion); USD 106,000,000 |
| Investors | Participation from strategic investors including NVIDIA and Salesforce Ventures; multiple tier-one VCs across rounds |
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Mamba, FlexGen, SWARM Parallelism, Mixture of Agents, and RedPajama.
Role Overview
The Turbo Research team investigates how to make post-training and reinforcement learning for large language models efficient, scalable, and reliable. Our work sits at the intersection of RL algorithms, inference systems, and large-scale experimentation, where the cost and structure of inference dominate overall training efficiency and shape which learning algorithms are practical.
As a research intern, you will study RL and post-training methods whose performance and scalability are tightly coupled to inference behavior, co-designing algorithms and systems rather than treating them independently. Projects aim to unlock new regimes of experimentation (larger models, longer rollouts, and more complex evaluations) by rethinking how inference, scheduling, and training interact.
Requirements
We’re looking for research interns who want to work on foundational questions in RL and post-training, grounded in realistic inference systems.
You Might Be a Strong Fit If You
- Are pursuing a PhD or MS in Computer Science, EE, or a related field (exceptional undergraduates considered).
- Have research experience in one or more of:
  - RL or post-training for large models (e.g., RLHF, RLAIF, GRPO, preference optimization).
  - ML systems (inference engines, runtimes, distributed systems).
  - Large-scale empirical ML research or evaluation.
- Are comfortable with empirical research:
  - Designing controlled experiments and ablations.
  - Interpreting noisy results and drawing principled conclusions.
- Can work across abstraction layers:
  - Strong Python skills for experimentation.
  - Willingness to modify inference or training systems (experience with C++, CUDA, or similar is a plus).
- Care about research insight, not just benchmarks:
  - You ask why methods work or fail under real system constraints.
  - You think about how infrastructure assumptions shape algorithmic outcomes.

Preferred Qualifications

Application Process
Please Submit Your Application With

Internship Program Details
Our summer internship program spans 12 weeks, during which you’ll work with industry-leading engineers building a cloud from the ground up and may contribute to influential open-source projects. Our internship dates are May 18th to August 7th or June 15th to September 4th.

Compensation
We offer competitive compensation, housing stipends, and other benefits. The estimated US hourly rate for this role is $58-63/hr. Our hourly rates are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy

Example Research Directions
Intern projects are tailored to your background and interests, and may include:

Inference-Aware RL & Post-Training
- Designing RL or preference-optimization objectives that explicitly account for inference cost and structure (e.g., speculative decoding, partial rollouts, controllable sampling).
- Studying how inference-time approximations affect learning dynamics in GRPO-, RLHF-, RLAIF-, or DPO-style methods.
- Analyzing bias, variance, and stability trade-offs introduced by accelerated inference within RL loops.

RL-Centric Inference Systems
- Developing inference mechanisms that support deterministic, reproducible RL rollouts at scale.
- Exploring batching, scheduling, and memory-management strategies optimized for RL and evaluation workloads rather than pure serving.
- Investigating how KV-cache policies, sampling controls, or runtime abstractions influence learning efficiency.

Scaling Laws & Cost–Quality Trade-offs
- Empirically characterizing how reward improvement and generalization scale with rollout cost, latency, and throughput.
- Quantifying when systems-level optimizations change algorithmic behavior rather than only reducing runtime.
- Identifying regimes where inference efficiency unlocks qualitatively new learning capabilities.

Evaluation & Measurement
- Designing rigorous benchmarks and diagnostics for post-training and RL efficiency.
- Studying failure modes in long-horizon training and how system constraints shape outcomes.
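As background for the GRPO-style methods named in the research directions above: GRPO replaces a learned value baseline with a group-relative advantage computed over several rollouts of the same prompt. The sketch below is our own simplified illustration (function names are hypothetical; real implementations add KL regularization, clipping, and batched tensor math):

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantage estimate: normalize each rollout's reward
    by the mean and standard deviation of its own group, i.e. the set
    of rollouts sampled for the same prompt."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four rollouts of one prompt, scored by a reward model (illustrative values).
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the baseline is the group mean, the advantages sum to zero within each group: above-average rollouts are reinforced and below-average ones are penalized, without training a separate critic. This is also where inference systems enter the picture, since each gradient step requires a full group of sampled rollouts per prompt.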