
AI-native security for agentic applications, built from the ground up with fine-tuned models to protect and secure every layer of your agentic AI stack. Not just the prompt layer.

AI-native security for agentic applications, built from the ground up with fine-tuned models to protect and secure every layer of your agentic AI stack. Not just the prompt layer.
What they do: AI-native security for agentic AI applications (continuous red-teaming and real-time guardrails)
Founded / HQ: Founded by AI and cybersecurity veterans; headquartered in Sunnyvale, CA
Flagship products: Ascend AI (automated continuous red teaming) and Defend AI (real-time runtime guardrails)
Funding: $21M Series A announced Mar 27, 2025 (Lightspeed Venture Partners, Bain Capital Ventures)
Employee count: Approximately 38
| Company |
|---|
Security for agentic AI applications and chatbots (runtime safety, prompt-injection mitigation, automated red-teaming).
Cybersecurity / AI security
$21M
Announced launch from stealth with this round
“Top-tier venture backing (Lightspeed Venture Partners and Bain Capital Ventures)”
Your talent is powerful, protect it wisely with us!
AI is the biggest technology shift of our lifetime as it reshapes how people work and play! At Straiker , our mission is to ensure businesses can embrace AI with confidence. Founded by seasoned AI & Security entrepreneurs and backed by premier venture partners like Lightspeed and Bain capital, we’re building highly fine tuned AI technology to protect enterprises against the next-generation of AI threats. If you are a dreamer, self-starter, high IQ - low ego individual and want to join us in our mission to secure the future with AI, Straiker is the right place for you. Are you in?
About the Job:
Job Title: AI Engineer / Sr AI Engineer
Location: SF Bay Area (Hybrid — 3 days onsite at HQ). Willing to pay for relocation.
Job Type: Full-Time
As an AI Engineer at Straiker, you will play a key role in developing, fine-tuning, and deploying advanced AI models. This includes fine tuning LLMs to provide guard rails and other mechanisms that enable safety and security. You may also assist and contribute to adversarial LLM red teaming, developing tools and procedures as well as using AI to find security and safety vulnerabilities in GenAI applications. The ideal candidate has a strong background in machine learning, deep learning, and natural language processing, with hands-on experience in fine-tuning large models for various applications.
Key Responsibilities:
Qualifications:
Preferred Qualifications: