AI Compiler and Performance Engineer | ElastixAI · Teeming.ai
ElastixAI
ElastixAI Inc. is developing a next-generation AI inference platform that offers breakthrough total cost of ownership (TCO) per token and dynamically adapts to any deployment. The platform is designed to constantly evolve to power next-generation AI use cases, optimizing how large language models are run.
Location: Seattle, WA (Hybrid, 3 days/week in office)
About ElastixAI
ElastixAI is an early-stage startup on a mission to reinvent AI inference infrastructure from the ground up. We’re building a next-generation inference platform that delivers unprecedented efficiency by tightly integrating machine learning, the software stack, and custom hardware. Our philosophy is simple: the best performance comes from holistic co-design, where every layer, from model architecture to kernels to silicon, works in harmony. If you’re excited about pushing AI performance to its physical limits and about shaping the future of large-scale inference, we’d love to meet you.
Role Summary
We are looking for a deeply technical AI Compiler & Performance Engineer who thrives at the intersection of ML, compilers, and hardware. In this role, you will design how LLM operations decompose into highly efficient proprietary kernel primitives, optimize execution pipelines, and co-develop abstractions with our hardware and ML teams. This is not a “typical” compiler role; it’s a chance to rethink the entire AI compute stack. At ElastixAI, if improving inference efficiency requires inventing new quantization schemes, rethinking graph-level optimizations, or modifying the hardware ISA, we do it. You’ll have end-to-end ownership to explore radically new ideas and make them real.
Key Responsibilities
Break down LLM and transformer workloads into fine-grained primitives tailored to our proprietary compute hardware.
Design and implement IR transformations, graph optimizations, kernel lowering, and code generation for novel hardware architectures.
Collaborate with ML researchers to co-design algorithmic optimizations that yield real end-to-end performance gains.
Work closely with hardware architects to refine microarchitectural features, instruction sets, memory hierarchies, and execution models.
Build performance models, profiling tools, and benchmarking frameworks to identify bottlenecks and guide design decisions.
Prototype and validate improvements across the entire stack — from PyTorch/XLA-level passes to custom kernel implementations.
Contribute to shaping the overall system architecture of a first-of-its-kind inference engine.
Required Qualifications
BS/MS/PhD in Computer Science, Software Engineering, or a related field.
Deep experience building compilers, optimizing kernels, or working with ML frameworks at a systems level.
Strong proficiency in one or more programming languages, such as Python and C++.
Strong understanding of one or more of the following:
LLM architectures and transformer internals
MLIR, LLVM, XLA, TVM, Triton, or similar compiler infrastructures
GPU/TPU/FPGA/ASIC compute models, memory hierarchies, and parallel execution
Quantization, sparsity, or algorithmic optimization for deep learning
Deep expertise in ML frameworks (e.g., PyTorch, TensorFlow, JAX) and an understanding of ML model deployment challenges.
Solid understanding of software engineering best practices, including data structures, algorithms, and testing.
A habit of thinking in terms of latency, cycles, memory bandwidth, and arithmetic intensity, not just algorithms.
Excellent problem-solving abilities and a knack for tackling complex technical challenges.
Excitement about collaborating across ML, hardware, and software boundaries to invent something fundamentally new.
Strong communication skills and a proven ability to collaborate effectively in a cross-functional team environment.
Ability to thrive in a fast-paced, dynamic startup environment.
Preferred / Bonus
PhD in Computer Science, Software Engineering, or a related field.
Experience with custom hardware accelerators for ML inference.
Contributions to open-source compiler or ML systems projects.
Prior startup experience or background building first-generation systems.
What We Offer
A chance to be a foundational engineer in an innovative AI startup
Competitive compensation and startup equity package
Comprehensive medical, dental, and vision coverage (100% paid by employer)
A dynamic and collaborative work environment and the chance to have a significant impact on new technology
The opportunity to work on challenging problems at the intersection of ML, software, and systems
Salary
$130,000 – $250,000 per year