
EnCharge AI provides efficient, low‑power AI computing solutions that let businesses run advanced models from edge devices to the cloud. The company designs analog in‑memory computing chips and pairs them with hardware and software systems to accelerate dense matrix operations and reduce compute, power, and space requirements. Its product stack targets edge-to-cloud deployments and power‑constrained applications, integrating with existing AI workflows and system software for inference and model deployment. Founded in 2022 by semiconductor and AI systems veterans, EnCharge reports large-scale production experience, patents, and sustainability-focused efficiency gains.

Founded: 2022
Headquarters: Santa Clara, California
Tech focus: Analog in-memory AI accelerators and software for edge-to-cloud
Recent funding: Series B > $100M (announced Feb 2025)
Use case: Energy- and space-constrained AI inference acceleration for edge-to-cloud deployments.
Industry: Data and Analytics

Funding history:

| Amount | Milestone |
|---|---|
| $21.7M | Emergence from stealth |
| $22.6M | Raised to commercialize chips |
| $100M+ | Series B announced as more than $100M |

Investors: includes both financial and strategic investors such as Tiger Global, Samsung Ventures, CTBC/HH-CTBC, AlleyCorp, and others.
SoC Architect – Chiplet-Based Systems
Location: Bangalore / Remote (anywhere in India)
Job Description: Join us as a SoC Architect focusing on chiplet-based AI systems.
You will help define and drive the architecture of modular compute platforms using chiplet integration.
This role involves working closely with packaging, PHY, and interconnect experts to define die-to-die interfaces (e.g., UCIe, BoW, or custom links) and orchestrating integration across logic, memory, and I/O chiplets.
You’ll also own subsystem architecture for PCIe, RISC-V clusters, and memory hierarchies, while ensuring coherence and latency/power-optimized communication between disaggregated components.
Responsibilities:
• Define the SoC architecture for chiplet-based AI inference platforms, including inter-chiplet data paths, protocols, and synchronization strategies.
• Drive partitioning decisions between compute, I/O, memory, and control chiplets.
• Architect PCIe and DMA interfaces that interact with host systems and bridge to chiplet domains.
• Specify die-to-die interconnect requirements (e.g., bandwidth, latency, power) and collaborate with packaging and PHY teams.
• Integrate and verify third-party IPs for I/O, memory, and inter-chip communication.
• Support bring-up and debug of multi-chip systems.
Required Background:
• BS/MS/Ph.D. in EE or CS with 10–25+ years of SoC or multi-die system experience.
• Hands-on experience in chiplet-based design, including familiarity with UCIe, EMIB, Foveros, or similar packaging technologies.
• Strong understanding of modular SoC partitioning and die-to-die interconnect architectures.
• Experience in PCIe Gen 4/5, RISC-V subsystems, and high-performance memory interfaces (LPDDR4/5, HBM).
• Familiarity with chiplet-aware system bring-up and verification methodologies.
• SystemVerilog/UVM experience and knowledge of system-level test/debug strategies.
Contact:
Uday
Mulya Technologies
muday_bhaskar@yahoo.com
"Mining The Knowledge Community"