
EnCharge AI provides efficient, low‑power AI computing solutions that let businesses run advanced models from edge devices to the cloud. The company designs analog in‑memory computing chips and pairs them with hardware and software systems to accelerate dense matrix operations and reduce compute, power, and space requirements. Its product stack targets edge-to-cloud deployments and power‑constrained applications, integrating with existing AI workflows and system software for inference and model deployment. Founded in 2022 by semiconductor and AI systems veterans, EnCharge reports large-scale production experience, patents, and sustainability-focused efficiency gains.
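The "dense matrix operations" referenced above are, at their core, multiply-accumulate (MAC) loops over a weight matrix and an activation vector — the workload that in-memory accelerators map onto analog hardware. A minimal software-only sketch (illustrative, not EnCharge's implementation or API):

```python
# Illustrative only: the dense matrix-vector multiply-accumulate (MAC)
# pattern that analog in-memory AI accelerators target. Plain Python
# sketch, not EnCharge's hardware or software stack.

def matvec(matrix, vector):
    """Dense matrix-vector product: out[i] = sum_j matrix[i][j] * vector[j]."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

weights = [[1, 2], [3, 4]]    # hypothetical 2x2 weight matrix
activations = [10, 20]        # hypothetical input activation vector
print(matvec(weights, activations))  # → [50, 110]
```

In an analog in-memory design, the per-row sum above is computed in place where the weights are stored, rather than by moving weights to a digital ALU, which is the source of the power and area savings claimed.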

Founded: 2022
Headquarters: Santa Clara, California
Tech focus: Analog in-memory AI accelerators and software for edge-to-cloud
Recent funding: Series B > $100M (announced Feb 2025)
| Company | |
|---|---|
| Focus | Energy- and space-constrained AI inference acceleration for edge-to-cloud deployments |
| Founded | 2022 |
| Category | Data and Analytics |

| Funding amount (USD) | Note |
|---|---|
| $21.7M | Emergence from stealth with $21.7M |
| $22.6M | Raised $22.6M to commercialize chips |
| $100M+ | Series B announced as more than $100M |
Investors: "Includes both financial and strategic investors such as Tiger Global, Samsung Ventures, CTBC/HH-CTBC, AlleyCorp, and others."
Senior Director of Compiler Engineering
Bangalore
We are seeking a Senior Director of Compiler Engineering to lead the design, development, and delivery of the next-generation AI compiler and software stack powering our custom inference accelerators. This role combines deep compiler expertise, hands-on technical leadership, and organizational ownership across the compiler, runtime, and QA teams to enable cutting-edge model deployment and performance.
Key Responsibilities
Qualifications
Contact:
Uday
Mulya Technologies
muday_bhaskar@yahoo.com
"Mining The Knowledge Community"