Partner Details
EdgeCortix
Unosawa Tokyu Building 3F, 1-19-15 Ebisu, Shibuya, Tokyo 150-0013, Japan
Phone: +3-64417-9661
https://www.edgecortix.com/
Partner Overview
EdgeCortix is a next-generation, edge-AI-focused fabless semiconductor design company with a software-first approach: software robustness is considered first, even as the processor architectures are being designed. The company applies environmentally friendly, highly energy-efficient, ultra-low-latency AI (neural networks) to real-time, computer-vision-based intelligent applications, delivering near cloud-level performance directly to infrastructure endpoints and edge devices, where data curation and decision-making can happen together at low power while drastically reducing operating costs.
Partnering with leading design and manufacturing companies, EdgeCortix is currently in production with its patented technology stack, which provides its AI acceleration processor intellectual property (IP) and software tightly integrated with third-party hardware. This enables customers to achieve low-latency, high-performance AI inference within relatively low power envelopes (up to a few hundred watts), both at the device edge and in servers.
Partnership Solutions
- EdgeCortix Dynamic Neural Accelerator (DNA) is:
- A flexible IP core for deep learning inference that combines high compute capability, ultra-low latency, and a scalable inference engine.
- Specially optimized for inference on streaming and high-resolution data at batch size 1 (see the sketch after this list).
- A patented reconfigurable IP core that, in combination with EdgeCortix’s MERA software framework, enables seamless acceleration of today’s increasingly complex and compute-intensive AI workloads, while achieving over 90% array utilization.
- DNA bitstreams for Achronix deliver significantly lower inference latency on streaming data, a clear performance advantage over competing FPGAs, and better power efficiency than general-purpose processors.
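The batch-size-1 streaming pattern referenced above can be pictured as processing each frame the moment it arrives rather than accumulating a batch. The short Python sketch below is purely illustrative and does not use the MERA runtime API; `deployed_model` is a hypothetical stand-in for a network deployed through MERA onto the DNA IP.

```python
import numpy as np

def deployed_model(frame: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a network deployed through MERA onto DNA;
    # a real deployment would invoke the MERA runtime here (API not shown).
    return frame.mean(axis=(0, 1), keepdims=True)

def stream_frames(num_frames: int = 8, height: int = 1080, width: int = 1920):
    # Simulated high-resolution camera stream; each yield is one frame.
    for _ in range(num_frames):
        yield np.random.rand(height, width, 3).astype(np.float32)

# Batch size 1: every frame is processed the moment it arrives, which is the
# access pattern DNA is optimized for (low latency, no batching delay).
for frame in stream_frames():
    result = deployed_model(frame)
    print(result.shape)  # (1, 1, 3)
```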
- EdgeCortix MERA™ Compiler Framework:
- Complementing DNA is the MERA framework, which provides an integrated compilation library and runtime.
- Together with the DNA IP core, MERA enables software engineers to use Achronix FPGA cards as drop-in replacements for standard CPUs or GPUs, without leaving the comfort zone of standard frameworks such as PyTorch and TensorFlow.
- MERA consists of the compiler and software toolkit needed to enable deep neural network graph compilation and inference using the integrated DNA bitstreams.
- With built-in support for the open-source Apache TVM compiler framework, MERA provides the tools, APIs, code generator, and runtime needed to deploy a pre-trained deep neural network after a simple calibration and quantization step.
- MERA also supports quantizing models directly in the deep learning framework, e.g., PyTorch or TensorFlow Lite, as illustrated in the sketch below.
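To make the deployment flow concrete, the sketch below shows the kind of calibrate-and-quantize step described above, using standard PyTorch post-training static quantization on a toy model. It is a minimal sketch, not EdgeCortix's documented workflow: the final hand-off to MERA is left as a hypothetical placeholder, since the exact MERA/Apache TVM entry points are assumptions here.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub,
                                   get_default_qconfig, prepare, convert)

class SmallConvNet(nn.Module):
    """Stand-in for a pre-trained vision model."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.pool(self.relu(self.conv(x)))
        x = self.fc(torch.flatten(x, 1))
        return self.dequant(x)

model = SmallConvNet().eval()
model.qconfig = get_default_qconfig("fbgemm")

# Calibration: run a few representative batch-size-1 inputs through the
# observer-instrumented model so activation ranges can be recorded.
prepared = prepare(model)
for _ in range(4):
    prepared(torch.randn(1, 3, 224, 224))

quantized = convert(prepared)  # int8 model, ready for graph compilation
scripted = torch.jit.trace(quantized, torch.randn(1, 3, 224, 224))

# Hypothetical hand-off: MERA (built on Apache TVM) would take the traced,
# quantized model from here, compile it against a DNA bitstream, and return
# a runtime for the Achronix FPGA card. Exact function names are assumed.
# deployment = mera_compile(scripted, target="achronix-dna")
```

The point of the sketch is simply that quantization and calibration happen in the familiar framework, so the accelerator can be treated as a drop-in inference target afterward.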