
EnCharge AI is pioneering a new era of AI computation with its transformative, efficient, and sustainable in-memory computing technology. Addressing the limitations of traditional GPUs and digital AI accelerators, EnCharge AI claims up to 20x higher efficiency, 9x higher compute density, and 10x lower total cost of ownership, resulting in up to 100x lower CO2 emissions compared to cloud alternatives. Its technology integrates into various form factors, including chiplets, ASICs, and PCIe cards, enabling seamless edge-to-cloud AI deployments. EnCharge AI's mission is to democratize advanced AI, making it accessible to businesses of all sizes by enabling on-device processing for enhanced data privacy, security, and affordability. The company was founded in 2022 and is led by a team of industry veterans with deep expertise in semiconductor design and AI systems.

Founded: 2022
Headquarters: Santa Clara, California
Product: Analog in-memory AI accelerators (chiplets/ASICs/PCIe) and software
Key claim: Up to 20x efficiency, 9x compute density vs conventional digital accelerators
Total funding (reported): ≈$162.9M
Notable investors: Tiger Global, Anzu Partners, Samsung Ventures
Problem addressed: High-cost, energy-intensive AI computation on conventional digital accelerators; need for efficient edge-to-cloud AI compute.
Founded: 2022
Industry: Data and Analytics

Funding rounds:

| Round | Amount | Notes |
|---|---|---|
| Series A | $21.7M | Launch from stealth with Series A to commercialize in-memory computing hardware and software. |
| Follow-on (reported) | $22.6M | Reported round to commercialize AI-accelerating chips; brought total raised to ~$45M at the time. |
| Series B | >$100M | Led by Tiger Global with participation from multiple financial and strategic investors. |

Investor note: "Participation from strategic and large financial investors (e.g., Tiger Global, Samsung Ventures) alongside returning venture investors."
SoC Architect – Chiplet-Based Systems
Locations: Bangalore / Remote (Anywhere in India)
Job Description: Join us as an SoC Architect focusing on chiplet-based AI systems.
You will help define and drive the architecture of modular compute platforms using chiplet integration.
This role involves working closely with packaging, PHY, and interconnect experts to define die-to-die interfaces (e.g., UCIe, BoW, or custom links) and orchestrating integration across logic, memory, and I/O chiplets.
You’ll also own subsystem architecture for PCIe, RISC-V clusters, and memory hierarchies, while ensuring coherence and latency/power-optimized communication between disaggregated components.
Responsibilities:
• Define the SoC architecture for chiplet-based AI inference platforms, including inter-chiplet data paths, protocols, and synchronization strategies.
• Drive partitioning decisions between compute, I/O, memory, and control chiplets.
• Architect PCIe and DMA interfaces that interact with host systems and bridge to chiplet domains.
• Specify die-to-die interconnect requirements (e.g., bandwidth, latency, power) and collaborate with packaging and PHY teams.
• Integrate and verify third-party IPs for I/O, memory, and inter-chip communication.
• Support bring-up and debug of multi-chip systems.
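To illustrate the kind of first-pass sizing analysis the interconnect responsibilities above involve, here is a hypothetical sketch of a die-to-die bandwidth estimate. All numbers and the function itself are illustrative assumptions, not EnCharge AI specifications or methodology.

```python
# Hypothetical first-pass die-to-die (D2D) bandwidth estimate for a
# chiplet-based AI inference platform. Illustrative assumptions only.

def d2d_bandwidth_gbps(compute_tops: float,
                       bytes_per_op: float,
                       locality: float) -> float:
    """Estimate required die-to-die bandwidth in GB/s.

    compute_tops: sustained compute on the accelerator chiplet (tera-ops/s)
    bytes_per_op: average operand/result traffic per op (bytes)
    locality:     fraction of traffic served on-die (0..1); only the
                  remainder must cross the D2D link
    """
    total_traffic_gbs = compute_tops * 1e12 * bytes_per_op / 1e9
    return total_traffic_gbs * (1.0 - locality)

# Example: 100 TOPS sustained, 0.1 B/op effective traffic, 95% on-die locality
required = d2d_bandwidth_gbps(100, 0.1, 0.95)
print(f"{required:.0f} GB/s")  # 500 GB/s must cross the die-to-die link
```

Estimates like this feed the bandwidth/latency/power targets that are negotiated with packaging and PHY teams when selecting a die-to-die standard such as UCIe.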
Required Background:
• BS/MS/Ph.D. in EE or CS with 10–25+ years of SoC or multi-die system experience.
• Hands-on experience in chiplet-based design, including familiarity with die-to-die standards and advanced packaging technologies such as UCIe, EMIB, or Foveros.
• Strong understanding of modular SoC partitioning and die-to-die interconnect architectures.
• Experience in PCIe Gen 4/5, RISC-V subsystems, and high-performance memory interfaces (LPDDR4/5, HBM).
• Familiarity with chiplet-aware system bring-up and verification methodologies.
• SystemVerilog/UVM experience and knowledge of system-level test/debug strategies.
Contact:
Uday
Mulya Technologies
muday_bhaskar@yahoo.com
"Mining The Knowledge Community"