
Industry leaders like Etsy, Robinhood, and Stripe trust Assembled to provide customer-facing AI agents and workforce planning at scale. We automatically resolve millions of interactions through chat, email, and phone while optimizing staffing for hundreds of thousands of support professionals. Our mission is to elevate customer support through AI-powered software that makes life easier for customers and employees.

Founded: 2018
Headquarters: San Francisco, CA
Industry: Software Development
Product: Workforce management and AI-driven customer support platform
Focus: Customer support operations and workforce management
Notable customers: Etsy, Robinhood, Stripe
Latest disclosed round: Series B, $51M (May 2022)
Disclosed rounds to date: $3.1M, $16.6M, and $51M
Total disclosed funding: $70.7M (USD)
Investors: "Backed by strategic SaaS and VC investors including Stripe, Emergence Capital, Basis Set Ventures, SignalFire, Felicis, and NEA"
Assembled builds the infrastructure that underpins exceptional customer support, empowering companies like Cash App, Etsy, and Robinhood to deliver faster, better service at scale. With solutions for workforce management, BPO collaboration, and AI-powered issue resolution, Assembled simplifies the complexities of modern support operations by uniting in-house, outsourced, and AI-powered agents in a single operating system. Backed by $70M in funding from NEA, Emergence Capital, and Stripe, and driven by a team of experts passionate about problem-solving, we're at the forefront of support operations technology.

We're looking for a software engineer to join our Infrastructure team, building and operating the core systems that power Assist, our rapidly growing AI agent platform for customer support. Assist automates support workflows across email, chat, and voice, and has grown from $0 to $1M in ARR in just three months. As adoption accelerates, we're investing deeply in scaling its infrastructure to meet increasing demand and security expectations from enterprise customers.

As part of the AI Infrastructure team, you'll be responsible for the systems that enable Assist to be fast, reliable, and secure. You'll work on foundational platform components that power real-time LLM usage at scale, while also exploring how AI can be leveraged internally to make our engineering team more productive. This team is highly cross-functional, working closely with the AI, security, and product engineering teams. This is a high-ownership role for someone who's excited by 0-to-1 building and by shaping the infrastructure backbone of our AI products.

You'll work on:

* LLM serving: We manage and scale the infrastructure that serves LLM-powered agents across chat, email, and voice. This includes selecting inference strategies, integrating with model providers (e.g., OpenAI, Anthropic), and dynamically routing traffic for performance and cost efficiency.
* Storage and retrieval: Assist relies heavily on dynamically generated prompts and semantic search across support content. The team owns the highly available, fast-access storage and indexing layers optimized for real-time AI interactions.
* Security and compliance: Enterprises expect strict guardrails around AI use. We're building systems such as network-level intrusion detection and prevention (IDS/IPS), audit logging, and LLM usage policy enforcement to meet these expectations and unlock new sales channels.
* Observability: We operate systems that surface key metrics, including token usage, latency, cost per response, and quality signals, so the team can continuously improve Assist's performance and accuracy.
* AI-assisted engineering: We are beginning to explore and evangelize the use of AI to accelerate internal engineering workflows, through internal chat agents, pair programming tools, and intelligent automation for deployment, debugging, and on-call. Our goal is to empower engineers across the company to build faster and more confidently with AI.

You might be a good fit if you:

* Have 6+ years of engineering experience, with past ownership of high-scale, production-critical infrastructure
* Have experience with distributed systems and container orchestration (especially Kubernetes)
* Have worked with AI/ML platforms, or are excited to build foundational infrastructure for LLM-based applications
* Thrive in fast-paced environments with shifting requirements and ambiguous problem spaces
* Are motivated by impact, enjoy deep technical challenges, and want to work cross-functionally across security, AI, and product
* Have strong familiarity with one or more parts of our tech stack:
  * AWS
  * Kubernetes + Karpenter
  * Experience with OpenAI, Anthropic, or open-source model serving (e.g., vLLM, HuggingFace TGI, Ray Serve)
  * Vector databases (e.g., Pinecone, Weaviate, PGVector), semantic search, prompt templating systems
  * Postgres + PgBouncer, Snowflake, Redis
  * Go and Python
  * Datadog, Mezmo, CloudWatch, Buildkite, CircleCI

The estimated salary range for this role is $135,000-$280,000 per year. The base pay offered may vary depending on location, job-related knowledge, skills, and experience. Stock options are provided as part of the compensation package, in addition to a full range of medical, financial, and/or other benefits, dependent on the position offered.

Benefits include:

* Generous medical, dental, and vision benefits
* Paid company holidays, sick time, and unlimited time off
* Monthly credits to spend on each of: professional development, general wellness, Assembled customers, and commuting
* Paid parental leave
* Hybrid work model with catered lunches every day (M-F), plus snacks and beverages in our SF and NY offices
* 401(k) plan enrollment