EnCharge AI
EnCharge AI provides efficient, low‑power AI computing solutions that let businesses run advanced models from edge devices to the cloud. The company designs analog in‑memory computing chips and pairs them with hardware and software systems to accelerate dense matrix operations and reduce compute, power, and space requirements. Its product stack targets edge-to-cloud deployments and power‑constrained applications, integrating with existing AI workflows and system software for inference and model deployment. Founded in 2022 by semiconductor and AI systems veterans, EnCharge reports large-scale production experience, patents, and sustainability-focused efficiency gains.
AI Compute • Edge AI • Efficiency • Hardware • In-Memory Computing • Performance • Sustainability • TCO
enchargeai.com
HQ: Santa Clara, US
Team Size: 85
Open Jobs: 184
Total Funding: $163M
Latest Fundraise: last year
TL;DR
Founded: 2022
Headquarters: Santa Clara, California
Tech focus: Analog in-memory AI accelerators and software for edge-to-cloud
Recent funding: Series B > $100M (announced Feb 2025)
Company Overview
Problem Domain
Energy- and space-constrained AI inference acceleration for edge-to-cloud deployments.
Founded
2022
Industry
Data and Analytics
Tech Stack
Analog in-memory compute chips
Hardware systems for dense matrix operations
Software stack for inference/model deployment
Funding Track Record
Seed / Series A, 2022-12-14: $21.7M (emergence from stealth)
Undisclosed round, 2023-12-05: $22.6M (raised to commercialize chips)
Series B, 2025-02-13: $100M+ (announced as more than $100M)
Investor Signal
“Includes both financial and strategic investors such as Tiger Global, Samsung Ventures, CTBC/HH-CTBC, AlleyCorp, and others”
Related Companies
Company | HQ | Industry | Total Funding
Innatera Nanosystems | 🌍 Rijswijk | Data and Analytics, DeepTech, Hardware, Manufacturing, Software | $27M
quadric, Inc | 🇺🇸 Burlingame, US | Consumer Products, DeepTech, Hardware, Manufacturing | $74M
Fractile | 🇬🇧 GB | Hardware | -
Speedata.io | 🌍 Netanya | Data and Analytics, DeepTech, Information Technology | $114M
d-Matrix | 🇺🇸 US | Data and Analytics, DeepTech, Hardware, Information Technology, Internet Services, Manufacturing, Software | $429M
Join the Team
Senior Director of Compiler Engineering (Bangalore)
Hybrid • Greater Hyderabad Area, IN
We are seeking a Director of Compiler Engineering to lead the design, development, and delivery of the next-generation AI compiler and software stack powering our custom inference accelerators. This role combines deep compiler expertise, hands-on technical leadership, and organizational ownership across compiler, runtime, and QA teams to enable cutting-edge model deployment and performance.
Key Responsibilities
Technical Leadership: Define the compiler roadmap and architecture, driving innovation in graph compilation, lowering flows, and performance optimization.
New Model Enablement: Own end-to-end bring-up for new model families (e.g., Llama, MoE) from FX capture through runtime execution.
Frontend & IR Development:
Oversee development of modules such as llama.py, ensuring alignment with compiler FX expectations.
Lead FX graph transformations and lowering to the EAI primitive dialect (pattern library); see the sketch after this list.
Kernel & Ops Development: Direct kernel library integration, new kernel creation, and custom NPU RV ops to extend hardware capabilities.
Feature Ownership: Deliver compiler support for advanced features like device-to-device (D2D) communication and attention optimizations.
Cross-Team Collaboration: Partner with hardware, quantization, and runtime teams to ensure feature completeness and performance parity across new models.
Quality & Execution:
Establish robust QA, validation, and regression testing for compiler releases.
Ensure timely, high-quality software delivery with focus on reliability and maintainability.
People Management: Build, mentor, and scale a high-performing team of compiler and software engineers.
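The FX capture and lowering work described above can be pictured with a small, self-contained example. The sketch below is a minimal illustration, assuming PyTorch with torch.fx: it traces a toy module and rewrites one operator in the captured graph, roughly in the spirit of a pattern-library lowering pass. The TinyBlock module, the lower_relu_to_primitive function, and the clamp-based rewrite are hypothetical stand-ins for illustration only; they are not EnCharge's EAI primitive dialect, kernel library, or runtime APIs.

```python
# Minimal sketch of FX capture plus a single lowering-style rewrite.
# Assumes PyTorch with torch.fx; all names below are hypothetical examples.
import torch
import torch.fx as fx


class TinyBlock(torch.nn.Module):
    """Stand-in for one layer of a model family being brought up."""
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.proj(x))


def lower_relu_to_primitive(gm: fx.GraphModule) -> fx.GraphModule:
    """Example pattern rewrite: replace torch.relu nodes with a clamp-based
    form, mimicking how a lowering pass maps ops onto device primitives."""
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target is torch.relu:
            with gm.graph.inserting_after(node):
                new_node = gm.graph.call_function(
                    torch.clamp, args=(node.args[0],), kwargs={"min": 0.0}
                )
            node.replace_all_uses_with(new_node)
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm


# Capture the module as an FX graph, run the rewrite, then execute it.
gm = fx.symbolic_trace(TinyBlock())
gm = lower_relu_to_primitive(gm)
print(gm(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```

In a production compiler the hand-written loop would be replaced by a pattern library that matches FX subgraphs and lowers them to accelerator primitives, with the runtime handling scheduling and execution on the device.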
Qualifications
15+ years in compiler or systems software development, with 8+ years leading engineering teams.
Expertise in AI/ML graph compilers (e.g., Torch-MLIR, TVM, XLA, Torch-FX).
Strong understanding of neural network architectures, quantization, and hardware acceleration.
Proven success driving complex, cross-functional software initiatives from design to release.
Experience with MLIR/LLVM, hardware-software co-design, and open-source compiler contributions.
Exceptional leadership, communication, and collaboration skills.