Embedded gets the edge | Imaging and Machine Vision Europe

2022-03-26


Image: Sergey Nivens/shutterstock.com

In some respects, the industrial vision market has provided embedded vision products for many years - from smart cameras to self-contained vision devices with onboard image processing. A new wave of products now follows the development of computer processors powerful enough to run vision algorithms, such as board-level system-on-modules from NXP, GPUs and FPGAs.

Embedded computer hardware is small, inexpensive, powerful and energy-efficient. Some chips are being designed specifically to work with neural networks. Attaching a sensor module brings image processing closer to the sensor; this is known as ‘edge processing’, as opposed to processing in the cloud or sending the feed from the camera to a separate computer.

Embedded computing opens the door to many more uses of vision technology. Even in factories where machine vision is already established and successful, there are areas where embedded vision can play a role.

The downside to developing products on embedded boards is that integration complexity and cost are higher in a relatively new market. The Khronos Group, together with the European Machine Vision Association, is working on an open API standard for controlling embedded cameras across the mobile, industrial, XR, automotive and scientific markets. The new standard aims to help companies building embedded vision devices.

Among the camera vendors offering embedded vision products, Framos' sensor module ecosystem now supports Maxim's Gigabit Multimedia Serial Link (GMSL2) protocol. Cameras with resolutions of up to 8 megapixels at 30fps can be connected over 15m of cable.

GMSL2 offers a data rate of 6Gb/s with low cabling costs and low power consumption, and low latency can be maintained at distances of up to 15m. Framos has integrated its modules with Connect Tech's GMSL camera platform: the expansion board allows up to four Framos cameras to be connected to the Nvidia Jetson Xavier module via the GMSL2 protocol. The cameras are powered by Power over Coax - data, control signals and power are all sent through a single coaxial cable. The ability to connect any Framos image sensor to the Nvidia Jetson platform via GMSL2 over long distances opens up new applications.
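As a rough sanity check (not a calculation from the article), the quoted 8-megapixel, 30fps stream fits comfortably inside GMSL2's 6Gb/s link rate. The 12-bit pixel depth below is an assumption; real headroom also depends on serialisation overhead, which this sketch ignores.

```python
# Back-of-the-envelope check: does an 8-megapixel stream at 30fps fit
# within GMSL2's 6Gb/s link rate?

def stream_rate_gbps(pixels, fps, bits_per_pixel):
    """Raw video data rate in gigabits per second."""
    return pixels * fps * bits_per_pixel / 1e9

# Assumed 12-bit raw output; serialisation overhead is ignored here.
rate = stream_rate_gbps(8_000_000, 30, 12)
print(f"{rate:.2f} Gb/s")  # 2.88 Gb/s, well under the 6 Gb/s link rate
```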

Vision Components has released a SerDes adapter board so that coax cables of 10m and longer can now be used with the Mipi interface. The company's Flexible Printed Circuit (FPC) cable is available in three standard versions with 15, 22 or 24 pins, in lengths of 60mm, 100mm or 200mm. Transfer rates of 6Gb/s are possible, corresponding to 750MB/s at an 8-bit pixel resolution.
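The stated figures are easy to verify: 6Gb/s divided by 8 bits per byte is 750MB/s. The 5-megapixel example below is hypothetical, added only to show what that bandwidth means in frames per second.

```python
# Check the quoted conversion: 6Gb/s over the link equals 750MB/s at
# 8 bits per pixel.
link_gbps = 6.0
mbytes_per_s = link_gbps * 1e9 / 8 / 1e6   # bits -> bytes -> megabytes
print(mbytes_per_s)  # 750.0, matching the stated 750MB/s

# For an assumed 5-megapixel, 8-bit stream (a hypothetical example, not
# a figure from the article), that bandwidth allows up to 150fps:
max_fps = mbytes_per_s * 1e6 / 5_000_000
print(max_fps)  # 150.0
```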

The FPC cables have a trigger input and a flash output, enabling both video streaming and single-image acquisition without affecting the host system. Settings such as shutter speed, position and binning, gain and image size can be adjusted individually for each image. Used with an external trigger, the cables allow very fast processes to be captured and synchronised. Vision Components has also developed repeater boards that extend transmission paths by up to five times.

Vision Components’ Flexible Printed Circuit cable

Vision Components also now offers an FPGA-based accelerator for edge preprocessing of image data. It has multiple Mipi-CSI-2 inputs and outputs and can merge data from several Mipi cameras.

The latest Quartet TX2 carrier board from Teledyne Flir enables simultaneous streaming from four USB3 board-level cameras at full bandwidth. It is well suited to space-constrained applications, eliminating the need for peripheral hardware and host systems, and comes pre-integrated with the company's Spinnaker SDK.

Basler and Variscite both offer solutions based on the NXP i.MX 8 applications processor series, as well as various evaluation kits. For its development kits destined for multimedia use, Variscite employs Basler embedded cameras so that companies can quickly get started and test vision-based applications.

Coinciding with the announcement of NXP's latest applications processor, the i.MX 8M Plus, Basler released a reference camera module that matches the system-on-chip (SoC). The company also recently launched an industrial-grade embedded vision processing board with vision-optimised interfaces based on a Variscite system-on-module (SoM). Various camera types can be connected to the SoM, making it suitable for both prototyping and series production.

Also working with NXP boards, Thine Solutions recently introduced its second Theia-Cam family kit - the THSCM101, a 13-megapixel phase-detection auto-focus (PDAF) Linux camera reference design kit. The kit interfaces to NXP's i.MX 8M family and is based on Thine's THP7312-P image signal processor and Sony's IMX258 13MP CMOS PDAF image sensor. It includes a camera board, a mini-SAS cable to connect the camera to the NXP i.MX 8M's CSI port and all the required software to stream images. The kit is plug-and-play, with firmware optimised for most V4L2 camera functions.
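For a V4L2-compliant kit like this, a first smoke test from the Linux command line might look like the following configuration sketch. The device path `/dev/video0`, the resolution and the pixel format are assumptions for illustration, not figures from the kit's documentation - the supported modes should be enumerated first.

```shell
# Hypothetical first steps with a V4L2 camera (device node and video
# mode are assumptions; check the kit's documentation for real values).
v4l2-ctl --device /dev/video0 --list-formats-ext      # enumerate modes
v4l2-ctl --device /dev/video0 \
    --set-fmt-video=width=1920,height=1080,pixelformat=UYVY
v4l2-ctl --device /dev/video0 --stream-mmap --stream-count=30 \
    --stream-to=frames.raw                            # grab 30 frames
```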

The German firm Phytec also concentrates on NXP solutions for its embedded vision offerings, and has 26 years' experience building such solutions.

Other new embedded cameras include the Nicla Vision 2-megapixel camera from Arduino. It measures 22.86 x 22.86mm, has a powerful dual processor and is compatible with Portenta and MKR components. It integrates with OpenMV, supports MicroPython and features wifi and Bluetooth low-energy connectivity.

E-con Systems' latest launch is the See3Cam_CU135M, a 13-megapixel monochrome USB 3.1 Gen 1 camera. It is based on the 1/3.2-inch AR1335 monochrome CMOS image sensor from Onsemi. The camera comes with an auto-exposure feature, and the absence of a colour filter array results in high quantum efficiency in both the visible and near-infrared regions, compared with cameras that have RGB colour filters.

On the software side, Irida Labs recently launched PerCV.ai, a software and services platform that supports the full vision-AI product lifecycle. PerCV.ai integrates machine learning models for detecting people, vehicles or any type of object, together with vision system design, data management and deployment tools for on-device intelligence. The platform is suitable for companies looking for a vision application-as-a-service using commercial off-the-shelf (COTS) hardware components, as well as those aiming to develop new vision sensor products.

Available from MVTec Software is a plugin for Intel's OpenVino toolkit, which enables users of MVTec software products to benefit from AI accelerator hardware compatible with OpenVino. The plugin is based on MVTec's Halcon AI accelerator interface - supported AI accelerator hardware can be used for the inference part of deep learning applications.

The AMD Xilinx Kria system-on-module portfolio was launched last year. First to market was the Kria K26, specifically targeting vision AI applications in smart cities and factories. The Kria K26 SoM offers 1.4 tera-ops of AI compute and is built on the Zynq UltraScale+ MPSoC architecture, featuring a quad-core Arm Cortex-A53 processor, more than 250,000 logic cells and an H.264/H.265 video codec. The SoM also features 4GB of DDR4 memory and 245 I/Os for connecting to virtually any sensor or interface.

Following the Kria launch, Pinnacle Imaging Systems announced it would offer its Denali 3.0 HDR ISP with a new HDR sensor module for the Kria K26 SoM and KV260 vision AI starter kit. Pinnacle Imaging also offers an IAS HDR sensor module paired with, and tuned for, the KV260 vision AI starter kit. The sensor module is based on the On Semiconductor AR0239 CMOS imager, capable of capturing up to 120dB (20EV) of dynamic range at 1080p/30 full HD resolution.
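The two dynamic-range figures quoted are the same quantity in different units, which a quick calculation confirms (this is a general conversion, not from the article): 120dB corresponds to a 1,000,000:1 contrast ratio, and the base-2 logarithm of that ratio gives the value in photographic stops (EV).

```python
import math

# Relate the quoted 120dB dynamic range to photographic stops (EV).
# dB here is 20*log10(max/min signal amplitude).
def db_to_stops(db):
    contrast_ratio = 10 ** (db / 20)      # 120 dB -> 1,000,000:1
    return math.log2(contrast_ratio)      # stops = log2 of the ratio

print(round(db_to_stops(120), 1))  # 19.9 stops, i.e. roughly 20EV
```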

Like Pinnacle, Hema Electronic integrated AMD Xilinx Kria SoMs into its embedded vision platform, adding modules specifically designed for use in areas such as machine vision and smart cities.

Vendors of embedded computing kits include Gidel, with its FantoVision 20, a small, robust embedded computer tailored for high-throughput image acquisition and processing. It allows image processing, compression and recording of video streams at up to 20Gb/s in real time. Its architecture combines an Nvidia Jetson Xavier NX embedded computer with an Intel Arria 10 FPGA for frame grabbing with real-time AI and image processing.

The Cincoze Gold series is a range of GPU computers designed to meet the needs of large-scale image processing, machine vision and machine learning applications in AIoT. The series includes the GP-3000 and GM-1000, providing a range of sizes, performance, I/O and functionality.

Advantech offers a series of solutions for embedded computing powered by Nvidia GPUs. Built on the Nvidia Ampere architecture, the latest Sky-MXM-A2000 is an industrial-grade GPU-accelerated solution designed to deliver the latest RTX technology. It features real-time ray tracing, accelerated AI, advanced graphics and high-performance computing capabilities.

Finally, Congatec offers 12th-generation Intel Core mobile and desktop processors (formerly Alder Lake) on 10 of its COM-HPC and COM Express computer-on-modules. The COM Express Type 6 modules and the COM-HPC Size A and C modules provide major performance gains based on Intel's performance hybrid architecture. The company also recently extended its i.MX 8 ecosystem with a new starter set for AI-accelerated embedded vision applications. The set contains an entire ecosystem for developers to start designing applications; at its heart is the Smarc 2.1 computer-on-module Conga-SMX8-Plus.

This is not an exhaustive list. If you provide embedded vision products and would like your company to be included, please email editor.imaging@europascience.com.

Machine vision camera suppliers reduce time to market with transport layer IP cores. The core competency of engineers designing machine vision cameras and systems is usually configuring the core camera features to provide the best possible image while meeting size, weight, power budget and other constraints. But they must also devote considerable time and effort to successfully streaming images from the camera to the host. Leading-edge vision transport layer standards such as GigE Vision, USB3 Vision and CoaXPress are complex and evolving, so interface design typically requires several months of work by experienced protocol engineers.

S2I's Vision Standard IP Cores are delivered as a working reference design, alongside compact FPGA IP cores fully tested against a wide range of popular frame grabbers and image acquisition libraries. The IP cores enable machine vision companies to build FPGA-based products following these standards, delivering the highest possible performance in a small footprint while minimising development time.

More information: www.euresys.com/en/Products/IP-Cores/Vision-Standard-IP-Cores-for-FPGA

CXP-12 is the latest CoaXPress standard, transmitting video at 12.5Gb/s per link. While the data rate through the frame grabber has doubled, the overall architecture remains the same as the previous-generation Cyton.

This allows users to migrate easily to the newer cameras without major software changes. The Claxon-CXP4 is a quad CXP-12 PCIe Gen 3 frame grabber, supporting multi-link CXP-12 cameras with up to four links. The Claxon-CXP4-V variant is designed for small-form-factor PCs with little to no airflow.

More information: www.bitflow.com/products/coaxpress/claxon-cxp4-v/
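To put the quad-link figure in perspective, a rough aggregate calculation (not from the article) gives the raw bandwidth across all four CXP-12 links; the real payload rate is lower, since protocol and line-encoding overhead is ignored here.

```python
# Rough aggregate figure for a quad CXP-12 frame grabber, before
# protocol/encoding overhead (which reduces the usable payload rate).
links, gbps_per_link = 4, 12.5
aggregate_gbps = links * gbps_per_link
print(aggregate_gbps)        # 50.0 Gb/s raw across all four links
print(aggregate_gbps / 8)    # 6.25 GB/s upper bound, ignoring overhead
```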


A roundup of some of the latest embedded vision technology
