in potentially expensive field updates.

The promise of adaptive computing

Adaptive computing – encompassing hardware that can be optimized for specific applications, such as Field Programmable Gate Arrays (FPGAs) – is a powerful solution for AI-enabled edge applications. New adaptive hardware has also been introduced, including adaptive Systems-on-Chip (SoCs), which contain FPGA fabric coupled with one or more embedded CPU subsystems. However, adaptive computing is much more than “just hardware”. It incorporates a comprehensive set of design and runtime software that, when combined, delivers a unique adaptive platform from which highly flexible, yet efficient, systems can be built.

Adaptive computing allows domain-specific architectures (DSAs) to be implemented without the design time and upfront cost of custom silicon devices such as ASICs. This means flexible, optimized solutions can be deployed rapidly for any given domain, including AI-enabled edge applications. Adaptive SoCs are ideal for such domain-specific processing because they combine the flexibility of a comprehensive, embedded CPU subsystem with the optimal data processing of adaptive hardware.

Introducing Adaptive System-on-Modules

System-on-Modules (SOMs) provide a complete, production-ready computing platform and save significant development time and cost versus chip-down development. SOMs can plug into a larger edge application, providing both the flexibility of a custom implementation and the ease of use and reduced time to market of an off-the-shelf solution. These benefits make SOMs an ideal platform for edge AI applications.
Figure 1: Kria K26 SOM
However, to achieve the performance required by modern AI-enabled applications, acceleration is needed. Some applications require custom hardware components to interface with an adaptive SoC, meaning chip-down design is still needed. However, an increasing number of AI-enabled edge applications need similar hardware components and interfaces, even for vastly different end applications. As industries have moved towards standardized interfaces and communication protocols, the same set of components is suitable for a variety of applications, despite vastly different processing needs.

An adaptive SOM for an AI-enabled edge application incorporates an adaptive SoC with industry-standard interfaces and components, allowing developers with limited or even no hardware experience to benefit from adaptive computing technology. An adaptive SoC can implement both the AI and the non-AI processing, and hence the whole application. Additionally, an adaptive SoC on an adaptive SOM enables a high degree of customization. An adaptive SOM is designed to be integrated into larger systems and uses a predefined form factor. Adaptive SOMs make it possible to take full advantage of adaptive computing without having to do chip-down design. An adaptive SOM is just part of the solution, however; the software is also key.
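As a concrete illustration of that software layer, here is a minimal sketch of how a developer with little or no hardware-design experience might drive an accelerator in the FPGA fabric from Python. It assumes the open-source PYNQ framework and a hypothetical overlay file, filter_accel.bit, containing an AXI DMA engine named axi_dma_0 in front of a custom processing block; the file and instance names are illustrative only.

    import numpy as np
    from pynq import Overlay, allocate

    # Load a prebuilt FPGA design ("overlay") into the adaptive hardware.
    # "filter_accel.bit" is a hypothetical design containing an AXI DMA
    # engine in front of a custom data-processing block.
    overlay = Overlay("filter_accel.bit")
    dma = overlay.axi_dma_0  # instance name depends on the overlay

    # Allocate buffers visible to both the Arm CPU subsystem and the fabric.
    samples = allocate(shape=(4096,), dtype=np.int32)
    results = allocate(shape=(4096,), dtype=np.int32)
    samples[:] = np.arange(4096, dtype=np.int32)

    # Stream the data through the hardware block and wait for the result;
    # the rest of the application keeps running on the embedded CPUs.
    dma.sendchannel.transfer(samples)
    dma.recvchannel.transfer(results)
    dma.sendchannel.wait()
    dma.recvchannel.wait()

    print(results[:8])

In this model the hardware team, or a prebuilt off-the-shelf overlay, supplies the bitstream, while the application developer works entirely in software.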
Companies that adopt adaptive SOMs benefit from a unique combination of performance, flexibility, and rapid development time. They can enjoy the benefits of adaptive computing without the need to build their own circuit boards, something that has only recently become possible at the edge with the introduction of Xilinx’s Kria™ portfolio of adaptive SOMs.

The Kria K26 SOM is built on top of the Zynq® UltraScale+™ MPSoC architecture, which features a quad-core Arm® Cortex™-A53 processor, more than 250,000 logic cells, and an H.264/H.265 video codec. The SOM also features 4GB of DDR4 memory, along with 69 3.3V I/Os and 116 1.8V I/Os, which allow it to adapt to virtually any sensor or interface. With 1.4 tera-operations per second (TOPS) of AI compute, the Kria K26 SOM enables developers to create vision AI applications offering more than 3X higher performance at
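To give a feel for how such a vision AI application is typically structured in software, the following sketch outlines inference on a neural network compiled for a deep-learning processing unit (DPU) in the FPGA fabric, using the Vitis AI runtime (VART) Python bindings. The model file name is hypothetical, preprocessing is omitted, and the exact tensor types depend on how the model was compiled, so treat this as an outline rather than a drop-in example.

    import numpy as np
    import vart
    import xir

    # Load a model compiled for the DPU; "model.xmodel" is a placeholder name.
    graph = xir.Graph.deserialize("model.xmodel")
    subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
    dpu_subgraph = next(s for s in subgraphs
                        if s.has_attr("device") and s.get_attr("device").upper() == "DPU")
    runner = vart.Runner.create_runner(dpu_subgraph, "run")

    # Query the shapes the compiled model expects and allocate host buffers.
    in_tensor = runner.get_input_tensors()[0]
    out_tensor = runner.get_output_tensors()[0]
    in_buf = [np.zeros(tuple(in_tensor.dims), dtype=np.float32)]
    out_buf = [np.zeros(tuple(out_tensor.dims), dtype=np.float32)]

    # in_buf[0] would be filled with a preprocessed camera frame here
    # (quantized models may expect int8 data instead of float32).

    # Run inference on the adaptive hardware; the Arm cores remain free
    # for capture, decode, networking and other non-AI work.
    job_id = runner.execute_async(in_buf, out_buf)
    runner.wait(job_id)
    top1 = int(np.argmax(out_buf[0]))
    print("Predicted class index:", top1)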