Unlock the Full Potential of FPGAs for Real-Time ML Inference — Tune an Overlay to an Architecture

Scale solutions in the FPGA transparently and quickly

FPGAs accelerate conversational AI and Natural Language Processing applications

An FPGA can be a very attractive platform for many Machine Learning (ML) inference applications. In this presentation, using the example of automatic speech recognition (ASR), we will explore how FPGAs can be used to accelerate conversational AI and Natural Language Processing applications. We’ll review the key components of the Achronix FPGA architecture, such as the 2D Network on Chip (NoC), high-speed external memory, and an optimized Machine Learning Processor (MLP), and show how our symmetrical architecture makes scaling solutions in the FPGA transparent and quick to deploy. Using standard benchmarks, we demonstrate an ASR appliance that can reduce costs by as much as 90% compared with alternative approaches.
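For readers curious what "transparent scaling" across identical compute units can look like in practice, the short Python sketch below is purely illustrative: it is not Achronix's overlay or MLP API, and every name in it is hypothetical. It tiles a matrix multiply across a configurable number of identical worker units, so raising the unit count adds parallelism while the per-unit kernel stays unchanged, which is the basic idea behind scaling a design by replicating a symmetric processing element.

    # Illustrative only: NOT Achronix's MLP/overlay API. Shows how work can be
    # tiled across N identical compute units so that scaling means adding units
    # while the per-unit kernel stays fixed.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def unit_kernel(a_tile: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Fixed kernel each hypothetical compute unit runs: one row tile of A times B.
        return a_tile @ b

    def tiled_matmul(a: np.ndarray, b: np.ndarray, num_units: int) -> np.ndarray:
        # Split A into row tiles, one per unit, and run the same kernel on each.
        tiles = np.array_split(a, num_units, axis=0)
        with ThreadPoolExecutor(max_workers=num_units) as pool:
            parts = list(pool.map(lambda t: unit_kernel(t, b), tiles))
        return np.vstack(parts)

    if __name__ == "__main__":
        a = np.random.rand(512, 256).astype(np.float32)
        b = np.random.rand(256, 128).astype(np.float32)
        out = tiled_matmul(a, b, num_units=4)      # change num_units to "scale" the design
        assert np.allclose(out, a @ b, atol=1e-3)  # same result, more parallel units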

What you’ll learn:

  • FPGA Architecture for ML Inference
  • Overlay
  • ASR Appliance Solution


Register to View Webinar

About the Speaker(s)

Bill Jenkins – Director of AI Product Marketing at Achronix

An engineer turned product marketer, Bill is the Director of AI Product Marketing at Achronix. He joined the company after 11 years at Altera/Intel PSG, where he worked in various technical and marketing roles. He led the initial concept and rollout of PSG’s high-level synthesis (HLS) strategy with OpenCL, developed Intel’s AI solution strategy to showcase the value of FPGAs across a wide variety of market segments, and developed AI, HPC, radar, and security solutions within the military, aerospace, and government (MAG) business unit. Bill earned his BS and master’s degrees in electrical engineering, as well as an MBA, from the University of Massachusetts Lowell.