System architects and program managers worry about how to mitigate risk when developing an SoC. Risk can present itself in many forms, but it is most commonly the result of ill-defined and changing standards as well as the need to future-proof designs.
The Programmable Logic Approach
Relegating at-risk portions of the design to programmable logic is the traditional approach taken by system architects. However, a standalone FPGA cannot always deliver the needed performance or meet system power and cost goals. Enter the embedded FPGA (eFPGA), the SoC designer’s secret weapon to meet design goals while mitigating risk. Compared to using a standalone FPGA, adding an eFPGA to an SoC yields a more flexible design, lower power, higher performance, and lower overall system cost.
The Advantages of an eFPGA
The advantages of selecting an eFPGA over an FPGA are numerous. First, an eFPGA requires less die area than a standalone FPGA because the I/O circuitry that provides chip-to-chip connectivity on a PCB is eliminated, and the embedded fabric is sized specifically to the application's requirements. Because the eFPGA's die area is minimized, the additional cost it adds to the SoC is small.
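As a rough illustration of what "sized to application requirements" means for area, the sketch below compares an embedded fabric built only from the resources an application needs against a standalone device that carries a fixed I/O and packaging overhead. All of the unit-area numbers are hypothetical placeholders, not figures from any real FPGA or eFPGA.

```python
# Illustrative only: back-of-envelope fabric sizing with hypothetical unit areas.
# None of these numbers come from a real eFPGA or FPGA datasheet.

AREA_PER_LUT = 1.0            # hypothetical area per look-up table (arbitrary units)
AREA_PER_DSP = 40.0           # hypothetical area per DSP block
AREA_PER_BRAM_KB = 8.0        # hypothetical area per KB of block RAM
STANDALONE_IO_OVERHEAD = 25_000.0  # hypothetical fixed I/O ring and config overhead

def efpga_fabric_area(luts: int, dsps: int, bram_kb: int) -> float:
    """Embedded fabric: only the resources the application actually needs."""
    return luts * AREA_PER_LUT + dsps * AREA_PER_DSP + bram_kb * AREA_PER_BRAM_KB

def standalone_fpga_area(luts: int, dsps: int, bram_kb: int) -> float:
    """Standalone device: the same fabric plus the fixed chip-to-chip I/O overhead."""
    return efpga_fabric_area(luts, dsps, bram_kb) + STANDALONE_IO_OVERHEAD

if __name__ == "__main__":
    need = dict(luts=20_000, dsps=64, bram_kb=512)  # hypothetical application requirements
    print(f"eFPGA fabric area:    {efpga_fabric_area(**need):>10.0f} units")
    print(f"Standalone FPGA area: {standalone_fpga_area(**need):>10.0f} units")
```

The point of the sketch is only the structure of the comparison: the embedded fabric scales with what the application needs, while the standalone part always pays for the I/O ring whether or not the application uses it.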
By forgoing a standalone FPGA and embedding programmable logic as a tailored combination of look-up tables, memory, and DSP blocks, eFPGAs offer fundamental improvements in signal delays, bandwidth, latency, power, and cost. Board design becomes easier, power and cooling requirements are reduced, and system reliability improves. The system BoM improves in both cost and component count because the discrete FPGA, along with all of its supporting devices (level shifters, voltage regulators, and bypass capacitors), is eliminated, and there are significant savings in PCB real estate.
In many instances, the system architect will define his or her own custom block functions to be included in the eFPGA along with the standard logic, embedded memory, and DSP blocks. These custom blocks are integrated into the logic fabric alongside the traditional building blocks of LUTs, RAMs, and DSPs, extending the capability of the eFPGA with functions optimized to reduce area and/or increase performance for targeted applications.
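One way to picture this is as a resource specification the architect writes down before the fabric is generated. The Python sketch below is purely illustrative: the FabricSpec and CustomBlock types are hypothetical and not part of any vendor tool flow; they simply show how a mix of standard resources plus an application-specific block might be captured.

```python
# Illustrative only: a hypothetical way to capture the desired eFPGA resource mix,
# including a custom block, before handing it to a fabric-compilation flow.
from dataclasses import dataclass, field

@dataclass
class CustomBlock:
    name: str        # e.g., a matrix-multiply or CRC engine tuned for the application
    instances: int   # a real definition would also carry port, timing, and placement data

@dataclass
class FabricSpec:
    lut4s: int                      # look-up tables
    bram_kbits: int                 # embedded block RAM
    dsp_blocks: int                 # multiply-accumulate DSP blocks
    custom_blocks: list[CustomBlock] = field(default_factory=list)

# Hypothetical example: standard resources plus one application-specific block type.
spec = FabricSpec(
    lut4s=20_000,
    bram_kbits=4_096,
    dsp_blocks=64,
    custom_blocks=[CustomBlock(name="int8_matmul_engine", instances=4)],
)
print(spec)
```

The design choice being illustrated is that the custom block sits in the same fabric description as the LUTs, RAMs, and DSPs, so the programmable interconnect can reach it just like any other building block.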
In artificial intelligence (AI) applications with considerable compute workloads, requirements on both the training and inference sides are evolving. It takes significant resources, both financial and in time to market, to bring a custom ASIC dedicated to a specific application to market. By the time the chip ships, the system architect may already be looking to implement an optimized version of the current AI algorithm, a change that is not possible post-tapeout in an ASIC. Traditional FPGAs, even though not ideally suited to meet tomorrow's AI requirements, will continue to fill that gap because they offer greater flexibility and programmability than ASICs.
Speedcore eFPGAs Are a Winning Solution
System architects agree that eFPGA integration is a winning proposition that can make an SoC or ASIC adaptable to a wide range of compute-intensive applications, including AI and machine learning, 5G wireless, data center, automotive, and high-performance computing (HPC).
For more information on Speedcore eFPGA IP, visit Introducing the Speedcore eFPGA.