New-Tech Europe Magazine | Q3 2021 | Digital Edition

Accelerating Innovation at the Edge with Adaptive System-on-Modules

Evan Leal, Director, Product Marketing - Boards & Kits at Xilinx

AI-enabled applications are increasingly being deployed at the edge and endpoint. High-performance AI inference is enabling smarter cities and highly automated smart factories. Even retail is becoming more sophisticated, with smart stores ushering in automated shopping experiences. These applications need to be extremely reliable and require high performance, while being delivered in an efficient and compact form factor.

Edge processing challenges

When deploying a system at the edge, power consumption, footprint, and cost are all limiting factors. Growing processing demands, within these constraints, make it challenging to provide applications with the required level of performance. While CPUs have continued to improve at the edge, the gains have slowed in recent years. Unaccelerated CPUs struggle to deliver the performance needed for the next generation of AI-enabled edge applications, especially given their tight latency requirements.

A domain-specific architecture (DSA) is key when implementing advanced AI applications at the edge, and it also provides determinism and low latency. A suitable DSA is designed specifically to process the required data efficiently, covering both the AI inference and the non-AI parts of the application: essentially the whole application. This matters because AI inference depends on non-AI pre- and post-processing, all of which have demanding performance requirements.
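To make the whole-application point concrete, the following minimal Python sketch models an edge vision pipeline as three stages: non-AI pre-processing, the AI inference itself, and non-AI post-processing. The function names, tensor shapes, and the stand-in model are illustrative assumptions only (this is not a Xilinx API); the takeaway is that every stage sits inside the same latency budget, so accelerating the inference alone still leaves the pipeline gated by the unaccelerated stages.

# Illustrative edge-inference pipeline: three stages in one latency budget.
# The stage boundaries are the point; the maths inside each stage is a stand-in.
import numpy as np

def pre_process(frame: np.ndarray, size: int = 224) -> np.ndarray:
    """Non-AI stage: centre-crop, nearest-neighbour resize, normalise."""
    h, w, _ = frame.shape
    side = min(h, w)
    crop = frame[(h - side) // 2:(h + side) // 2,
                 (w - side) // 2:(w + side) // 2]
    idx = np.linspace(0, side - 1, size).astype(int)
    resized = crop[idx][:, idx]
    return (resized.astype(np.float32) / 255.0 - 0.5) / 0.5

def run_inference(tensor: np.ndarray, num_classes: int = 10) -> np.ndarray:
    """AI stage: stand-in for the accelerated model (random weights here)."""
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((tensor.size, num_classes)).astype(np.float32)
    return tensor.reshape(1, -1) @ weights

def post_process(logits: np.ndarray) -> int:
    """Non-AI stage: softmax and arg-max to produce the class decision."""
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    return int(probs.argmax())

if __name__ == "__main__":
    # Fake camera frame standing in for the sensor input.
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    result = post_process(run_inference(pre_process(frame)))
    print("predicted class:", result)

In a real deployment, each of these stages would be mapped onto the appropriate engine of the device rather than executed serially on a CPU as in this sketch.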

Fundamentally, whole-application acceleration is required to implement efficient AI-enabled applications at the edge (and elsewhere).

Like any fixed-silicon solution, application-specific standard products (ASSPs) developed for AI edge applications still have limitations. The main challenge is that AI innovation moves incredibly quickly, leaving AI models obsolete much faster than non-AI technologies. Fixed-silicon devices that implement AI can quickly become outdated as newer, more efficient AI models emerge. It can take several years to tape out a fixed-silicon device, by which time the state of the art in AI models will have advanced. Security and functional safety requirements are also becoming more important for edge applications, often resulting
