What they do: Provide a production-ready memory layer that compresses and indexes conversational facts for LLM applications
Location & size: San Francisco; ~10 employees
Compliance & deployment: Advertises SOC 2 and HIPAA compliance with deploy-anywhere options
Funding signal: Raised $24M across Seed and Series A (Series A led by Basis Set Ventures)
Sector: Artificial intelligence / AI infrastructure (memory for large language model applications)
Funding history: Pre-seed and Series A recorded on company profiles; reporting combines the seed and Series A rounds into the $24M total
Investors: Seed led by Kindred Ventures; Series A led by Basis Set Ventures with participation from Peak XV Partners, GitHub Fund, and Y Combinator; additional angels listed
Role Summary:
Own the end-to-end lifecycle of memory features, from research to production. You'll fine-tune models for extraction, updates, consolidation/forgetting, and conflict resolution; turn customer pain points into research hypotheses; implement and benchmark ideas from papers; and partner with Engineering to ship at SOTA latency, reliability, and cost. You'll also build evaluation at scale (offline metrics plus online A/B tests) and close the loop with real-world feedback to continuously improve quality.
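The lifecycle the summary names (extract facts, update the store, consolidate/forget, resolve conflicts) could be sketched very roughly as below. This is an illustrative toy, not the company's actual system; all names (`MemoryStore`, `user.city`, the recency-wins policy) are hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    # Hypothetical store: key -> (value, turn at which it was stated)
    facts: dict = field(default_factory=dict)

    def update(self, key: str, value: str, turn: int) -> None:
        # Conflict resolution (assumed policy): the newer statement wins.
        if key not in self.facts or turn >= self.facts[key][1]:
            self.facts[key] = (value, turn)

    def consolidate(self, max_facts: int) -> None:
        # Forgetting (assumed policy): drop the oldest facts past a budget.
        if len(self.facts) > max_facts:
            keep = sorted(self.facts.items(), key=lambda kv: kv[1][1])[-max_facts:]
            self.facts = dict(keep)

store = MemoryStore()
store.update("user.city", "Boston", turn=1)
store.update("user.city", "Denver", turn=5)  # newer fact overrides the older one
store.consolidate(max_facts=10)
print(store.facts["user.city"][0])  # -> Denver
```

A production version would swap the dict for an indexed store and learned models for each step, but the loop shape (update, consolidate, resolve) is the part the role owns.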
What You'll Do:
Minimum Qualifications:
Nice to Have:
Compensation Range: $175K - $250K