
Author: Evan Leal, Director of Product Marketing, Boards and Kits, Xilinx
AI-enabled applications are increasingly being deployed to the edge and the endpoint. High-performance AI inference is turning smarter cities and highly automated factories into reality, and smart retail is introducing highly sophisticated automated shopping experiences. These applications demand extremely high reliability and performance, delivered in efficient, compact form factors.
The challenges of edge processing
When deploying a system at the edge, power consumption, board area, and cost are all constraints. Within those constraints, continually rising processing requirements make it ever more challenging to deliver the needed level of performance. Although CPUs have evolved to serve edge computing, their performance growth has slowed in recent years. An unaccelerated CPU struggles to deliver the performance that the new generation of AI-enabled edge applications demands, especially given their strict latency requirements.
When implementing cutting-edge AI applications at the edge, a domain-specific architecture (DSA) is key; it also provides determinism and low latency.
A well-designed DSA is built to process its target data efficiently, accelerating both the AI inference and the non-AI portions of an application, in other words, the application as a whole. This matters because AI inference typically depends on compute-intensive non-AI pre-processing and post-processing. Fundamentally, realizing efficient AI-enabled applications at the edge (and elsewhere) requires acceleration of the whole application.
Like any fixed-function silicon solution, application-specific standard products (ASSPs) developed for edge AI applications have limitations of their own. The main challenge is the extraordinary pace of AI innovation: AI models become obsolete far faster than non-AI technologies do. A fixed-function device implementing today's AI will quickly fall behind as newer, more efficient models emerge, because taping out such a device takes years, by which time the state of the art in AI models has moved on. In addition, security and functional-safety requirements are increasingly important for edge applications and may frequently demand costly in-field updates.
The future of adaptive computing
Adaptive computing encompasses hardware that can be optimized for a specific application, such as the field-programmable gate array (FPGA), making it a powerful solution for AI-enabled edge applications.
New adaptive hardware is also emerging, including adaptive systems-on-chip (SoCs) that combine an FPGA fabric with one or more embedded CPU subsystems. Adaptive computing, however, is far more than hardware alone: it integrates comprehensive design tools and runtime software. Together, hardware and software form a unique adaptive platform on which highly flexible and efficient systems can be built.
Implementing a DSA with adaptive computing avoids the design time and upfront cost of custom silicon such as an ASIC, enabling rapid deployment of optimized, flexible solutions for any domain-specific application, including AI-enabled edge applications. Adaptive SoCs are ideal for this kind of domain-specific processing because they pair the flexibility of a full embedded CPU subsystem with the outstanding data-processing capability of adaptive hardware.
Introducing the adaptive system-on-module (SOM)
A system-on-module (SOM) provides a complete, production-ready computing platform. Compared with chip-down development, this approach saves considerable development time and cost. A SOM plugs into a larger edge application system, offering both the flexibility of a custom implementation and the ease of use and faster time to market of an off-the-shelf solution. These advantages make the SOM an ideal platform for edge AI applications; however, acceleration is essential to reach the performance that modern AI applications demand.
Certain applications require custom hardware components to interface directly with the adaptive SoC, which calls for chip-down design at the board level. Increasingly, though, AI-enabled edge applications need similar hardware components and interfaces even when the end applications differ widely. As companies converge on standardized interfaces and communication protocols, the same set of components can serve many types of applications despite significantly different processing requirements.
An adaptive SOM for AI-enabled edge applications combines an adaptive SoC with industry-standard interfaces and components, allowing developers with limited or no hardware experience to benefit from adaptive computing. Because the adaptive SoC handles both AI and non-AI processing, it can meet the processing needs of the whole application.
The adaptive SoC on an adaptive SOM also supports a high degree of customization, and the module is designed to integrate into a larger system within predefined dimensions. With an adaptive SOM, developers can exploit the full advantages of adaptive computing while avoiding chip-down design from scratch. The SOM itself, however, is only part of the solution; software is just as essential.
Companies that adopt adaptive SOMs benefit from a unique combination of performance, flexibility, and rapid development, enjoying the advantages of adaptive computing without having to build their own boards. At the edge, this has only recently become possible with the introduction of Xilinx's Kria™ portfolio of adaptive SOMs.
Kria K26 SOM
The Kria K26 SOM is built on the Zynq® UltraScale+™ MPSoC architecture, featuring a quad-core Arm® Cortex™-A53 processor, more than 250,000 logic cells, and an H.264/H.265 video codec. The SOM also carries 4GB of DDR4 memory along with 69 3.3V I/Os and 116 1.8V I/Os, allowing it to adapt to virtually any sensor or interface. With 1.4 TOPS of AI compute, the Kria K26 SOM lets developers build vision AI applications with lower latency, lower power consumption, and up to 3x higher performance than GPU-based SOMs. This is a boon for smart-vision applications such as security, traffic, and city cameras, retail analytics, machine vision, and vision-guided robotics. By standardizing the core of the system, developers have more time to focus on the proprietary features that differentiate them in the market.
Unlike other edge AI products whose software can be updated but whose accelerators are fixed, the Kria SOM is flexible on two fronts: both software and hardware can be updated in the future. Users can adapt the I/O interfaces, vision processing, and AI accelerators to support some or all of the following: MIPI, LVDS, and SLVS-EC interfaces; high-quality, purpose-built high-dynamic-range imaging algorithms for day or night; an 8-bit deep learning processing unit; or future 4-bit or even 2-bit deep neural network approaches. Combining multi-modal sensor fusion with real-time AI processing is now easy to achieve: designs can begin on the Xilinx KV260 Vision AI Starter Kit (shown below) and be deployed to production on the Kria K26 SOM.
Kria KV260 Vision AI Starter Kit
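As an illustration of what targeting the SOM's 8-bit deep learning processing unit can look like in practice, here is a minimal sketch of post-training quantization using the Vitis AI PyTorch flow. It assumes a Vitis AI environment that provides the pytorch_nndct package; the resnet18 network and random calibration tensors are stand-ins for a real model and dataset.

```python
# Minimal sketch: post-training quantization with the Vitis AI PyTorch flow.
# Assumes a Vitis AI environment that ships the pytorch_nndct package;
# resnet18 and the random tensors below are placeholders for a real
# model and calibration dataset.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from pytorch_nndct.apis import torch_quantizer

model = resnet18().eval()                    # float32 model to be quantized
dummy_input = torch.randn(1, 3, 224, 224)    # shape used to trace the graph

# "calib" mode inserts quantization ops and collects activation statistics.
quantizer = torch_quantizer("calib", model, (dummy_input,))
quant_model = quantizer.quant_model

# A few hundred representative samples are typical; random data stands in here.
calib_loader = DataLoader(TensorDataset(torch.randn(64, 3, 224, 224)), batch_size=8)
with torch.no_grad():
    for (images,) in calib_loader:
        quant_model(images)

quantizer.export_quant_config()              # write calibration results to disk
# A second pass in "test" mode evaluates accuracy and exports the .xmodel
# that the Vitis AI compiler turns into a DPU executable for the K26 SOM.
```

An 8-bit model produced this way runs on the DPU overlay, which, per the flexibility described above, can later be exchanged for a different accelerator without a board respin.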
Advantages for software and hardware developers
Adaptive SOMs benefit both hardware and software developers. For hardware developers, an adaptive SOM is a ready-made, production-ready solution that saves substantial development cost and time. It also allows the hardware team to change the design late in the process, something a SOM based on fixed-function silicon cannot do.
For AI and software developers, adaptive computing is more approachable than ever. Xilinx has invested heavily in tool flows to make adaptive computing easy to use, and the Kria SOM portfolio takes this ease of use to a new level by pairing the hardware and software platform with production-ready vision accelerated applications. These turnkey applications eliminate all FPGA hardware design work; software developers need only integrate their custom AI models and application code, and optionally modify the vision pipeline. Supported by the Vitis™ unified software development platform and libraries, they can work in familiar design environments such as the TensorFlow, PyTorch, and Caffe frameworks and the C, C++, OpenCL™, and Python programming languages.
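To make that software-only workflow concrete, the following is a minimal sketch of running a compiled model on the DPU through the Vitis AI Runtime (VART) Python API. The "model.xmodel" path and the zeroed input frame are placeholders, and input pre-processing and fixed-point scaling are omitted for brevity.

```python
# Minimal sketch: DPU inference via the Vitis AI Runtime (VART) Python API.
# "model.xmodel" is a placeholder for a compiled model; pre-processing and
# fixed-point scaling of the tensors are omitted for brevity.
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("model.xmodel")

# The DPU-executable portion of the graph is the child subgraph whose
# "device" attribute is "DPU".
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")

runner = vart.Runner.create_runner(dpu_subgraph, "run")
input_tensor = runner.get_input_tensors()[0]
output_tensor = runner.get_output_tensors()[0]

# A preprocessed camera frame would go here; zeros keep the sketch self-contained.
input_data = np.zeros(tuple(input_tensor.dims), dtype=np.int8)
output_data = np.zeros(tuple(output_tensor.dims), dtype=np.int8)

job_id = runner.execute_async([input_data], [output_data])  # queue the inference
runner.wait(job_id)                                          # block until done
# output_data now holds the quantized network output for post-processing.
```

Because the accelerator is loaded as a prebuilt overlay, this is the only layer a software developer has to touch; the FPGA design underneath remains a turnkey artifact.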
Alongside this new software-defined paradigm of accelerated applications, Xilinx has also launched the first embedded app store for edge applications, offering customers a rich selection of Kria SOM applications from Xilinx and its ecosystem partners. Xilinx's own solutions are free, open-source accelerated applications, including smart camera, face detection, natural language processing with smart vision assistance, and many others.
A flexible future
AI models will continue to evolve at a rapid pace, so acceleration platforms must remain flexible enough to adopt the best AI implementations of today and tomorrow. The SOM provides an ideal edge processing platform: combined with an adaptive SoC, it offers a comprehensive, production-ready platform for AI-enabled applications. Companies that adopt such devices benefit from a unique combination of performance, flexibility, and rapid development, reaping substantial returns from adaptive computing.