As part of the seminar of the Center for the Study of Complex Systems of the Institute of Physics Belgrade, on Thursday, 11 February 2021 at 14:00, via the Zoom platform, Dr. Vladimir Lončar (Scientific Computing Laboratory, Institute of Physics Belgrade) will give the talk:
hls4ml: Fast inference of deep neural networks in FPGAs
With edge computing, real-time inference of deep neural networks (DNNs) on custom hardware has become increasingly relevant. Smartphone companies are incorporating Artificial Intelligence (AI) chips in their designs for on-device inference to improve user experience and tighten data security, and the autonomous vehicle industry is turning to application-specific integrated circuits (ASICs) to keep latency low. While the typical acceptable latency for real-time inference in applications like those above is O(1) ms, other applications require sub-microsecond inference. For instance, high-frequency trading machine learning (ML) algorithms run on field-programmable gate arrays (FPGAs), highly specialized devices, to make decisions within nanoseconds. At the extreme end of the inference spectrum, in both low latency (as in high-frequency trading) and limited area (as in smartphone applications), is the processing of data from proton-proton collisions at the Large Hadron Collider (LHC) at CERN. Here, latencies of O(1) microsecond are required and resources are strictly limited. To address these challenges, we have developed hls4ml, an open-source library that converts pre-trained ML models into FPGA firmware, targeting extreme low-latency inference in order to stay within the strict constraints imposed by the CERN particle detectors.
In this talk, we will describe the essential features of the hls4ml workflow and network optimization techniques, including how to reduce the footprint of a machine learning model using state-of-the-art techniques such as model pruning and quantization through quantization-aware training.
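To give a flavor of the optimization techniques mentioned above, the sketch below shows simplified NumPy stand-ins, not hls4ml's actual implementation: magnitude pruning zeroes out the smallest weights so the corresponding multipliers can be removed from the firmware, and fixed-point quantization rounds weights onto the grid that an FPGA datatype such as `ap_fixed<8,1>` can store. In quantization-aware training, the same rounding is applied in the forward pass during training so the network learns to compensate for the reduced precision. Both function names and parameters here are hypothetical illustrations.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (magnitude pruning).

    Illustrative sketch: zeroed weights mean the matching
    multiply-accumulate units can be dropped from the FPGA design.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_fixed_point(weights, total_bits=8, int_bits=1):
    """Round weights to a signed fixed-point grid.

    Mimics a datatype like ap_fixed<8,1>: `int_bits` integer bits
    (including sign) and `total_bits - int_bits` fractional bits.
    """
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** (int_bits - 1) - 1.0 / scale
    min_val = -2.0 ** (int_bits - 1)
    return np.clip(np.round(weights * scale) / scale, min_val, max_val)

# Toy example: prune half the weights, then quantize the survivors.
w = np.array([0.8, -0.05, 0.3, -0.02])
w_small = quantize_fixed_point(prune_by_magnitude(w, sparsity=0.5))
```

In an actual quantization-aware training loop, `quantize_fixed_point` would be applied to the weights on every forward pass, with gradients flowing through the rounding as if it were the identity (the straight-through estimator).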
Meeting ID: 857 9507 6891