
To make AI inference commercially viable, d-Matrix has built a new computing platform from the ground up: Corsair™, the world’s most efficient compute solution for AI inference at datacenter scale. We are redefining performance and efficiency for AI inference at scale.

Headquarters: Santa Clara, California
Founded: 2019
Core product: Corsair inference accelerators (digital in-memory computing platform)
Total funding (company-stated): ≈ $450M
Employee count (snapshot): 265
| Company | |
|---|---|
| Focus | AI inference acceleration for datacenter-scale generative AI |
| Founded | 2019 |
| Industry | Semiconductor Manufacturing |
| Funding rounds (as listed) | $110,000,000; $275,000,000 |
| Valuation / total funding | Company-stated valuation $2 billion; company-stated total funding to date $450 million |
| Investors | Temasek, Playground Global, M12, Nautilus Venture Partners, Industry Ventures, Mirae Asset |
At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration. We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.
We are seeking a motivated and innovative Machine Learning Intern to join our team. The intern will work on developing a dynamic Key-Value (KV) cache solution for Large Language Model (LLM) inference, aimed at enhancing memory utilization and execution efficiency on d-Matrix hardware. This project will involve modeling at the PyTorch graph level to enable efficient, torch-native support for KV-cache, addressing limitations in current solutions.
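To give a feel for the problem space, here is a minimal, hypothetical sketch of the dynamic-cache idea: during autoregressive decoding, each step appends a key/value pair, and a bounded cache evicts the oldest entries so memory tracks a fixed window rather than the full sequence length. This is an illustrative toy (the class name, capacity policy, and string stand-ins for tensors are all assumptions), not d-Matrix's actual design or a torch-graph-level implementation.

```python
# Illustrative sketch only (hypothetical, not d-Matrix's design):
# a dynamic KV cache with sliding-window eviction, using strings
# as stand-ins for per-token key/value tensors.
from collections import deque


class DynamicKVCache:
    def __init__(self, max_entries):
        # Bounded buffers: deque(maxlen=...) drops the oldest
        # entry automatically once capacity is reached.
        self.keys = deque(maxlen=max_entries)
        self.values = deque(maxlen=max_entries)

    def append(self, k, v):
        # One decode step: cache this token's key and value.
        self.keys.append(k)
        self.values.append(v)

    def snapshot(self):
        # The keys/values that attention would see at this step.
        return list(self.keys), list(self.values)


cache = DynamicKVCache(max_entries=4)
for step in range(6):  # 6 decode steps against a window of 4
    cache.append(f"k{step}", f"v{step}")
keys, values = cache.snapshot()
print(keys)  # only the 4 most recent keys remain
```

A real torch-native version would manage preallocated tensor buffers and eviction policies inside the traced graph; the fixed-window behavior shown here is just the simplest capacity policy to state concretely.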
Responsibilities
Key Objectives
Requirements
Preferred Qualifications
This role is ideal for a self-motivated intern interested in applying advanced memory management techniques in the context of large-scale machine learning inference. If you’re passionate about optimizing machine learning models and are excited to explore cutting-edge solutions in model inference, we encourage you to apply.
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.