
Edged AI designs and provides tensor processing unit (TPU) cores that accelerate neural networks for edge AI applications. Its architecture is built around a single, fully parametric computation block with integrated matrix and vector operations, so the same IP can be tuned to a wide range of performance and power targets. Edged AI's IP is silicon-proven and designed for low-latency inference, making it suitable for demanding edge applications such as ADAS and industrial security. Its solutions deliver up to 50 trillion operations per second (50 TOPS) at an energy efficiency of 2 TOPS per watt, with seamless compatibility with frameworks like TensorFlow and Keras. The company offers the EDGED TPU IP Core for customization, plus two reference-design chips: Chip H for high-performance inference and Chip E for embedded systems. Edged AI emphasizes a no-tradeoff approach between responsiveness and performance, enabling high efficiency in power-constrained environments.
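As a rough back-of-envelope illustration (an inference from the two headline figures above, not a vendor specification), peak throughput and efficiency together imply a power envelope at full load:

```python
# Numbers taken from the text above; the implied power draw is a
# derived estimate, not a published spec.
peak_tops = 50.0              # trillion operations per second
efficiency_tops_per_w = 2.0   # TOPS per watt

# power = throughput / efficiency
implied_power_w = peak_tops / efficiency_tops_per_w
print(f"Implied power at peak throughput: {implied_power_w:.0f} W")
```

At the quoted figures this works out to roughly 25 W at peak, which frames where such an accelerator sits relative to typical edge power budgets.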