To make AI inference commercially viable, d-Matrix has built a new computing platform from the ground up: Corsair™, the world’s most efficient compute solution for AI inference at datacenter scale.
We are redefining performance and efficiency for AI inference at scale.
Total funding (company-stated): ≈ $450M
Employee count (snapshot): 265
Company Overview
Problem Domain
AI inference acceleration for datacenter-scale generative AI
Founded
2019
Industry
Semiconductor Manufacturing
Funding Track Record
Series B (2023-09-06): $110,000,000
Series C (2025-11-12): $275,000,000
Company-stated valuation $2 billion; company-stated total funding to date $450 million
Investor Signal
“Participation from strategic and financial investors including Temasek, Playground Global, M12, Nautilus Venture Partners, Industry Ventures, Mirae Asset”
Founders
What we do
Related Companies
| Company | HQ | Industry | Total Funding |
| --- | --- | --- | --- |
| Modular | US | Data and Analytics, DeepTech, Information Technology, Software | $380M |
| FriendliAI | US | Data and Analytics, DeepTech, Information Technology, Internet Services, Software | - |
| Fireworks AI | US | Data and Analytics, DeepTech, Information Technology, Software | $307M |
| Baseten | US | - | $585M |
| Cornelis Networks | US | Data and Analytics, DeepTech, Information Technology, Software | $20M |
Join the Team
Senior Director of Software Engineering
On-Site • San Francisco Bay Area, US
Who you are
MS or PhD in Computer Engineering, Math, Physics, or a related field, with 12+ years of industry experience
Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals
Proficient in C/C++ and Python development in a Linux environment using standard development tools
Experience implementing algorithms in high-level languages such as C/C++ and Python
Experience implementing algorithms for specialized hardware such as FPGAs, DSPs, GPUs, and AI accelerators, using libraries such as CUDA
Experience implementing operators commonly used in ML workloads: GEMMs, convolutions, BLAS routines, and SIMD operators for operations like softmax, layer normalization, and pooling
Experience with development for embedded SIMD vector processors such as Tensilica
Self-motivated team player with a strong sense of ownership and leadership
Prior startup, small team or incubation experience
Experience with ML frameworks such as TensorFlow and/or PyTorch
Experience working with ML compilers and algorithms, such as MLIR, LLVM, TVM, and Glow
Experience with a deep learning framework (such as PyTorch or TensorFlow) and ML models for CV, NLP, or recommendation systems
Work experience at a cloud provider or AI compute / sub-system company
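To give a flavor of the ML operators the requirements above mention (softmax, layer normalization, pooling), here is a minimal, illustrative reference implementation of a numerically stable softmax in plain Python. This is not d-Matrix code; it is the kind of simple reference a hand-tuned SIMD or accelerator kernel is typically validated against.

```python
import math

def softmax(xs):
    """Numerically stable softmax: subtract the max before exponentiating
    so that large inputs do not overflow exp()."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# A hardware kernel would be checked element-wise against this reference.
probs = softmax([1.0, 2.0, 3.0])
```

The max-subtraction trick is exactly the sort of numerical detail a kernel engineer must preserve when mapping the operator onto fixed-width SIMD lanes.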
What the job involves
The role requires you to be part of the team that productizes the software stack for our AI compute engine
As part of the Software team, you will be responsible for the development, enhancement, and maintenance of software kernels for next-generation AI hardware
You possess experience building software kernels for HW architectures
You possess a very strong understanding of various hardware architectures and how to map algorithms to the architecture
You understand how to map computational graphs generated by AI frameworks to the underlying architecture
You have experience working across all aspects of the full-stack tool chain and understand the nuances of optimizing and trading off various aspects of hardware-software co-design
You are able to build and scale software deliverables in a tight development window
You will work with a team of compiler experts to build out the compiler infrastructure working closely with other software (ML, Systems) and hardware (mixed signal, DSP, CPU) experts in the company
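As a toy illustration of the graph-mapping idea described above (not d-Matrix's actual stack; the graph, node names, and "kernel library" are all made up), the sketch below lowers a tiny computational graph to per-node kernel calls in topological order, which is the basic shape of mapping a framework-generated graph onto hardware operators:

```python
from graphlib import TopologicalSorter

# Toy computational graph: node -> list of its input (predecessor) nodes
graph = {
    "matmul":  [],           # weights x activations
    "bias":    ["matmul"],
    "softmax": ["bias"],
}

# Toy "kernel library": one launcher per operator type
kernels = {
    "matmul":  lambda: "launch GEMM kernel",
    "bias":    lambda: "launch elementwise-add kernel",
    "softmax": lambda: "launch softmax kernel",
}

# TopologicalSorter takes node -> predecessors, matching `graph` above;
# static_order() yields every node after all of its inputs.
schedule = list(TopologicalSorter(graph).static_order())
plan = [kernels[node]() for node in schedule]
```

A real compiler adds tiling, memory placement, and fusion decisions on top of this ordering, but the dependency-respecting schedule is the starting point.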