
EnCharge AI is pioneering a new era of AI computation with its transformative, efficient, and sustainable in-memory computing technology. Addressing the limitations of traditional GPUs and digital AI accelerators, EnCharge AI claims up to 20x higher efficiency, 9x higher compute density, and 10x lower total cost of ownership, resulting in up to 100x lower CO2 emissions compared to cloud alternatives. Its technology integrates into various form factors, including chiplets, ASICs, and PCIe cards, enabling seamless edge-to-cloud AI deployments. EnCharge AI's mission is to democratize advanced AI, making it accessible to businesses of all sizes by enabling on-device processing for enhanced data privacy, security, and affordability. The company was founded in 2022 and is led by a team of industry veterans with deep expertise in semiconductor design and AI systems.

Founded: 2022
Headquarters: Santa Clara
Product: Analog in-memory AI accelerators (chiplets/ASICs/PCIe) and software
Key claim: Up to 20x efficiency, 9x compute density vs conventional digital accelerators
Total funding (reported): ≈$162.9M
Notable investors: Tiger Global, Anzu Partners, Samsung Ventures
Problem addressed: High-cost, energy-intensive AI computation on conventional digital accelerators; need for efficient edge-to-cloud AI compute.
Industry: Data and Analytics
Series A (21.7M USD): Launch from stealth to commercialize in-memory computing hardware and software.
Follow-on round (22.6M USD): Reported round to commercialize AI-accelerating chips; brought total raised to ~$45M at the time.
Series B (>100M USD): Led by Tiger Global, with participation from strategic and large financial investors (e.g., Samsung Ventures) alongside returning venture investors.
Senior Director of Compiler Engineering
Bangalore
We are seeking a Senior Director of Compiler Engineering to lead the design, development, and delivery of the next-generation AI compiler and software stack powering our custom inference accelerators. This role combines deep compiler expertise, hands-on technical leadership, and organizational ownership across the compiler, runtime, and QA teams to enable cutting-edge model deployment and performance.
Key Responsibilities
Qualifications
Contact:
Uday
Mulya Technologies
muday_bhaskar@yahoo.com
"Mining The Knowledge Community"