
Sepal AI empowers ambitious AI teams to build faster and safer foundational AI models by providing expert-grounded data at every stage of development. They focus on accelerating model iteration and ensuring quality through rigorous vetting and a vast pool of specialized PhD-level experts. Their services include providing novel training datasets, custom benchmarks, pre-launch testing, and agentic applications, all supported by a human-centric approach to AI development. Sepal AI is SOC 2 Certified and is relied upon by top AI labs that prioritize responsible and high-impact AI.

Sepal AI empowers ambitious AI teams to build faster and safer foundational AI models by providing expert-grounded data at every stage of development. They focus on accelerating model iteration and ensuring quality through rigorous vetting and a vast pool of specialized PhD-level experts. Their services include providing novel training datasets, custom benchmarks, pre-launch testing, and agentic applications, all supported by a human-centric approach to AI development. Sepal AI is SOC 2 Certified and is relied upon by top AI labs that prioritize responsible and high-impact AI.
What they do: Expert-grounded training data, RL environments, evaluation tooling, and model fine-tuning for frontier AI labs and enterprises
Headquarters: San Francisco, California, United States
Founders: Fedor Paretsky; Kat Hu
Stage / funding: Pre-Seed; raised a round announced Sep 25, 2024
Security / compliance: SOC 2 Certified
Training data quality, model evaluation, and safety for large and frontier AI models
2024
Data and Analytics
500000 USD
“Y Combinator (lead/backing investor)”
| Company |
|---|
Note you must apply at this link in order to be considered: https://expert-hub.sepalai.com/application?applicationFormId=a3e099bb-4b46-4c57-b46c-20e51ca12bf7&outreachCampaignId=3c4a0b58-05b7-406f-8da8-6e9f7a3f90af Sepal AI partners with top AI research labs (OpenAI, Anthropic, Google DeepMind) to determine how dangerous AI models are. We need hackers, red-teamers, and CTF veterans to work with the newest LLM coding models to understand what they're capable of and to hack our CTFs. 🧠 What You'll Do - Design adversarial scenarios that probe AI assistants for injection, privilege-escalation, and data-exfiltration risks. - Execute red-team engagements against AI-enabled workflows (web, cloud, and SaaS). - Craft realistic exploit chains and payloads a real attacker might use, then measure whether the model blocks or facilitates the attack. - Build scoring rubrics, attack trees, and reproducible test harnesses to grade model resilience. - Collaborate with AI researchers to iterate on defenses, patch vulnerabilities, and improve sandboxing & policy enforcement. ✅ Who You Are - OSCP (or comparable) certification and 3+ years in professional red teaming or offensive security. - Proven CTF track record; comfortable with web, cloud, and application exploits. - Fluent in Python/Bash tooling, custom payload development, and exploit automation. - Strong grasp of common enterprise stacks (OAuth, SAML, REST, SQL, cloud IAM). - Passionate about AI safety and eager to keep cutting-edge models from becoming attack surfaces. 💸 Pay: $40–90/hr 🌍 Remote ⏱ Flexible hours (project-based, async)