Machine vision and control in automation systems

For the first time, AI-based machine vision technology has been seamlessly incorporated into an automation system, making it possible for the information from vision systems to provide advanced, real-time machine control.

news

7min

2025-10-12

01 Information from the new vision system can be fed into control loops in real time. The Flash Controller enables perfect synchronization of light and motion.

As production processes become increasingly reliant on real-time feedback from imaging-based inspections, the importance of machine vision is steadily growing. However, the potential of advanced machine vision systems has been constrained by inadequate integration into machine control systems. In effect, many machines have been flying blind, with machine vision and machine control living in different universes. As a result, incorporating a machine vision system into an application remains an extremely complex task, especially if the machine control system is also handling functions such as safety technology, motion control, robotics and computer numerical control (CNC). Now, however, B&R (a member of the ABB Group responsible for machine and factory automation) has enhanced its Smart Camera portfolio with powerful deep-learning functionalities that bring machine vision and control together.

 

01 Flash Controller: faster, more accurate, more flexible

  • Exact synchronicity: short exposure time to avoid image blur and increase accuracy
  • Rugged design for decentralized mounting without compromising performance (4 × 20 A)
  • Incredibly fast flash current rise times: 1 µs flash with 200 ns rise time
  • Cost savings: a single wire with no need for an additional trigger sensor, daisy-chain wiring, shorter cable lengths


Built-in machine vision

 

The latest AI-based Smart Camera models bring advanced artificial intelligence directly into the machine control loop. Thanks to their integrated edge AI capabilities, the new cameras enable real-time vision processing and rules-based inspection, without interrupting production – meaning that for the first time, machine vision technology has been seamlessly incorporated into an automation system »01, thus replacing simple machine vision sensors and PC-based vision tasks with edge-based applications.

 

The unprecedented depth of integration provided by the above-mentioned developments opens new possibilities that go far beyond quality inspection. For instance, information from a vision system can now be fed into control loops in real time to provide advanced machine control, thus allowing a camera to synchronize »02 with axis movements with microsecond precision. All hardware components require only a single cable (though a second hybrid connection would be needed to enable daisy-chain cabling with other vision components).

 

A vision system comprises a lighting system »03, cameras and intelligent image-processing algorithms. An AI Smart Camera supports a full suite of AI-based vision functions »04, including anomaly detection, optical character recognition (OCR), and object detection and classification. These can be combined with rules-based algorithms, enabling users to balance the flexibility of AI with the speed and precision of conventional vision. This hybrid approach can achieve complex inspection tasks such as identifying product types, detecting subtle defects and verifying printed codes – all in a single pass and with a single device.
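As a sketch of how such a hybrid, single-pass inspection might be structured (all names, the toy anomaly heuristic and the code pattern below are illustrative assumptions, not B&R's implementation):

```python
import re
from dataclasses import dataclass

@dataclass
class InspectionResult:
    anomaly_score: float  # 0..1, higher = more anomalous
    code_valid: bool      # result of the rules-based code check
    passed: bool

def ai_anomaly_score(image) -> float:
    """Stand-in for an AI model: flags strong deviation from mid-grey."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return min(1.0, abs(mean - 0.5) * 2)

# Rules-based check: the printed code must match a fixed pattern.
CODE_PATTERN = re.compile(r"^[A-Z]{2}\d{4}$")

def inspect(image, printed_code: str, threshold: float = 0.6) -> InspectionResult:
    """Combine the AI score and the rules-based check in one pass."""
    score = ai_anomaly_score(image)
    valid = bool(CODE_PATTERN.match(printed_code))
    return InspectionResult(score, valid, passed=score < threshold and valid)

good = [[0.50, 0.52], [0.48, 0.50]]
defect = [[0.95, 0.97], [0.99, 0.96]]
print(inspect(good, "AB1234").passed)    # True
print(inspect(defect, "AB1234").passed)  # False
```

The same device thus answers both "does this part look right?" (AI) and "is the printed code valid?" (rules) in a single evaluation.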

02 B&R’s new precision in motion: When robots, cameras, and conveyors move as one, production flows without pause.

 

03 Why accurate detection and analysis depend on ideal lighting

 

Machine vision systems require optimal lighting for inspection, identification and measurement. B&R's factory-calibrated lighting system improves imaging repeatability by a factor of 10 or more, providing high-quality input for deep-learning models, more accurate detection, fewer false positives and better long-term performance.

 

Lighting elements are available either integrated into a camera or as external devices synchronized with image capture. This ensures that even rapidly moving objects, or objects affected by machine vibrations, are perfectly illuminated with maximum precision and strobe intensity. With the strobe controller integrated directly into the lights, no additional hardware is required.

 

Automatic lighting modulation prevents stray light and other difficult lighting conditions from compromising performance. It also makes it easy to achieve extremely precise synchronization for high-speed image capture or accommodate object-specific requirements such as bright-field or dark-field illumination.
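A quick back-of-the-envelope calculation shows why such short, precisely synchronized strobes matter: motion blur grows linearly with exposure time. The speeds and optics below are illustrative assumptions, not B&R specifications:

```python
# Motion blur (in pixels) = object speed x exposure time / pixel size.

def blur_px(speed_mm_s: float, exposure_s: float, mm_per_px: float) -> float:
    """Pixels an object traverses while the sensor is exposed."""
    return speed_mm_s * exposure_s / mm_per_px

# Object moving at 2 m/s, optics resolving 0.1 mm per pixel:
print(blur_px(2000, 10e-3, 0.1))  # 10 ms exposure -> ~200 px of blur
print(blur_px(2000, 1e-6, 0.1))   # 1 µs strobe    -> ~0.02 px, effectively sharp
```

A microsecond-scale strobe therefore freezes motion that a conventional exposure would smear across hundreds of pixels.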

 

B&R is at the stage where large numbers of customers across many market sectors are trialling the new system. Setting up systems from scratch and working through the machine learning phase can take up to 12 months.

During this development stage, customers are reluctant to announce their activities to avoid losing competitive advantage.

 

Markets for fast-moving consumer goods like toothbrushes are fiercely competitive. Tufting machines, used to insert the bristles, are highly synchronized CNC-controlled machines. A driver unit runs a complex mechanical assembly that collects a bundle of bristles from a vertical feeder unit, together with a bonding wire, and embeds them in the defined hole in the toothbrush, which has been precisely positioned by a rotating turret.

 

Incorporating CNC and logic in a machine maximizes speed and productivity. Integrating machine vision to identify faults in real time adds further to productivity.

 

Image »03a shows the improvement in resolution achieved by using machine vision with flash lighting, as compared with the absence of ideal light »03b. For this application, a deep-learning tool was loaded with predefined parameters that represent a failure. During the training phase, the tool was shown 60 good images, enabling it to detect a full range of production errors.

03 Integrated machine vision with flash lighting (left) and machine vision without flash lighting (right)
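The training-on-good-samples idea can be sketched with a toy statistical model. The real system uses MVTec's deep-learning method; everything below, including the simulated images, is an illustrative assumption:

```python
import statistics

def train(good_images):
    """Compute per-pixel mean and standard deviation from good samples only."""
    n_pixels = len(good_images[0])
    means, stds = [], []
    for i in range(n_pixels):
        values = [img[i] for img in good_images]
        means.append(statistics.mean(values))
        stds.append(statistics.pstdev(values) or 1e-6)  # avoid zero std
    return means, stds

def anomaly_score(image, means, stds):
    """Maximum per-pixel z-score: high means far from what 'good' looks like."""
    return max(abs(p - m) / s for p, m, s in zip(image, means, stds))

# 60 simulated good images: flattened 4-pixel patches varying around 0.5.
good = [[0.5 + 0.01 * ((k + i) % 3 - 1) for i in range(4)] for k in range(60)]
means, stds = train(good)

print(anomaly_score(good[0], means, stds) < 5)               # in-distribution
print(anomaly_score([0.9, 0.9, 0.9, 0.9], means, stds) > 5)  # clear defect
```

No defective samples are needed for training; anything sufficiently far from the learned "good" distribution is flagged.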

04 AI Smart Camera features

  • Real-time automation and machine control
  • Brings AI into the control loop
  • Finds complex errors with only a few good images and post-processing
  • Can detect complex errors in, and differentiate between, local and global anomalies, such as a bottle's label color and the color of its liquid content
  • High-performance AI-based failure-detection applications can be set up within days instead of weeks
  • Synchronized within 1 µs
  • Power consumption of only 2 W to run inference on the deep-learning model
  • Repeatability 10 times better than predecessors
  • Can seamlessly switch between and apply new AI models

 

mapp Vision Technology Framework

mapp Vision is a set of hardware components and software tools integrated into B&R's automation platform and specifically designed for use with the company's vision cameras. It includes both hardware and software, such as the graphical interface mapp Vision HMI (human-machine interface) for visualization and control, as well as other functionalities for image acquisition, processing and analysis. mapp Vision is designed to integrate seamlessly with B&R's Automation Runtime and the engineering tool Automation Studio, allowing for tight control and coordination between vision tasks and other automation processes.

 

Using mapp Vision, control programmers can carry out numerous machine vision tasks themselves with minimal programming – no separate process variables are required. Components communicate intuitively with one another, meaning that only a few clicks are needed to integrate the images captured by a Smart Camera into an HMI application.

 

Camera, lighting parameters and trigger conditions can all be changed on the fly, making product changeovers and other runtime adjustments easy to implement. Furthermore, since the application is also stored on a controller, no data is lost if the camera is replaced. Vision specialists need not be consulted unless a situation arises that requires their specific expertise, such as handling difficult lighting conditions. Of particular importance is the fact that B&R’s vision system makes it easy to link multiple machines together without loss of stability or quality.

 

 

Sub-microsecond synchronization

Combining synchronized AI and rules-based vision in a single camera allows manufacturers to optimize high-speed inspection, sorting and handling tasks, thus cutting waste and boosting profitability. B&R’s patented fieldbus integration makes this possible by connecting machine vision to the control loop, which enables tight synchronization with motion control and robotics, while also handling HMI communication. This helps optimize processes in which reactions and adjustments need to happen at full production speed.

 

Trigger signals can now come directly from the controller or motion application. Thanks to sub-microsecond precision, image triggers and lighting controls are synchronized with the overall automation system. This opens a new world of opportunities in which dynamic applications with frequently changing speeds no longer require a separate encoder on the camera input.
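One reason a separate encoder becomes unnecessary is that the trigger delay can be derived directly from the motion application's own speed value, so a speed change is reflected immediately. The function and numbers below are an illustrative sketch, not B&R's API:

```python
# Derive the camera trigger delay from controller data instead of a
# dedicated encoder on the camera input. All names are hypothetical.

def trigger_delay_us(detect_pos_mm: float, camera_pos_mm: float,
                     belt_speed_mm_s: float) -> float:
    """Microseconds until an object at detect_pos reaches the camera."""
    if belt_speed_mm_s <= 0:
        raise ValueError("belt must be moving toward the camera")
    distance_mm = camera_pos_mm - detect_pos_mm
    return distance_mm / belt_speed_mm_s * 1_000_000

# Object detected 120 mm upstream of the camera, belt running at 600 mm/s:
print(round(trigger_delay_us(0.0, 120.0, 600.0)))  # 200000 µs (200 ms)
```

Because the speed comes from the same control loop that drives the axis, a dynamically changing conveyor speed updates the computed delay on every cycle.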

 

With this in mind, B&R has developed a new just-in-time (JIT) compiler that generates executable machine code when the application is loaded, rather than interpreting it later at runtime. Combined with a new quad-core processor, this reduces the processing time needed for measurement tasks by 75 percent without needing to invest in expensive dedicated PCs.
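The compile-once-at-load principle can be illustrated in miniature. B&R's compiler emits native machine code for the controller; Python's built-in compile() merely mimics the idea of translating a measurement formula once instead of re-parsing it on every cycle:

```python
import timeit

formula = "(width - nominal) / nominal * 100"  # deviation in percent

# Interpreted on every call: the formula string is parsed each time.
def interpreted(width, nominal):
    return eval(formula, {"width": width, "nominal": nominal})

# "Compiled" once when the application loads, then reused.
code = compile(formula, "<measurement>", "eval")
def compiled(width, nominal):
    return eval(code, {"width": width, "nominal": nominal})

assert interpreted(10.2, 10.0) == compiled(10.2, 10.0)  # same result
t_interp = timeit.timeit(lambda: interpreted(10.2, 10.0), number=10_000)
t_comp = timeit.timeit(lambda: compiled(10.2, 10.0), number=10_000)
print(f"interpreted: {t_interp:.4f} s, compiled once: {t_comp:.4f} s (10,000 calls)")
```

The absolute numbers are machine-dependent; the point is that the parsing cost is paid once at load time rather than on every measurement cycle.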

 

05 Deep Learning: AI is embedded in the control loop to detect production trends in real time, thus enabling optimization of manufacturing.

From anomaly detection to deep learning

 

Industrial AI aims to enhance operational efficiency by automating repetitive tasks, improving accuracy by reducing human error, and enabling real-time decision-making based on data-driven insights. These functions extract valuable information from the large amounts of data generated by manufacturing processes, with the goals of identifying areas for optimization, reducing waste and increasing efficiency, all of which are achieved by analyzing data from sensors, machines and systems. Advanced analytics techniques include predictive modelling and anomaly detection. Added to these is artificial intelligence, which steers machines toward handling tasks that have previously required human intelligence, such as reasoning, problem solving, and decision making.

 

An anomaly is an event or item that deviates from what is expected. Anomaly detection is used in a range of fields, including bank fraud detection, medical diagnostics, surveillance systems and autonomous driving. The anomalies that occur in manufactured products are usually random, such as changes in color or texture, scratches, misalignment, missing pieces, or errors in proportions. The frequency of anomalies is (hopefully!) low compared with that of standard events.

 

Anomaly detection allows engineers to detect and eliminate from a production line those parts or elements that are unacceptable, thus reducing costs and improving quality. Anomaly detection is useful in quality control systems but remains a major challenge for machine learning.

 

Humans are very good at recognizing whether an image is similar to what they have previously observed or whether it is something novel or anomalous. Machine learning systems, on the other hand, have shown difficulties with such tasks, primarily due to the limited availability of anomaly samples. Current detection methods have also suffered from reliance on complex network architectures. Therefore, it is not surprising that a significant amount of interest has been focused on optimizing machine learning methods for anomaly detection.

 

Visual anomaly detection (VAD) focuses on spotting irregular patterns or unexpected deviations from established normality in visual data »05. Modern VAD approaches are built on deep learning, which has recently started to play critical roles in a range of fields, as mentioned above, and has also laid down a marker for industrial image anomaly detection (IAD). Using such algorithms, a camera learns what "good" looks like, allowing it to quickly recognize any deviations.

 

Indeed, B&R’s anomaly detection time is 60 ms, and the company claims that when its visual anomaly detection system is equipped with a processor from partner company Hailo, a leader in intelligence processing units (AI processors), it is considerably faster than comparable products. A related measurement parameter is inference time, which in machine-learning parlance refers to the time it takes for a trained model to generate a prediction or output when presented with new, unseen data.
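Inference time can be measured with a simple wall-clock loop. The stub model below is a placeholder; real figures depend on the trained model and the hardware (such as the Hailo accelerator mentioned above):

```python
import time

def model_predict(image):
    """Stand-in for a trained model's forward pass on one image."""
    return sum(image) / len(image) > 0.5

def inference_time_ms(model, image, repeats: int = 100) -> float:
    """Average wall-clock milliseconds per prediction over many runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        model(image)
    return (time.perf_counter() - start) / repeats * 1000

image = [0.4, 0.6, 0.55, 0.45]
print(f"{inference_time_ms(model_predict, image):.4f} ms per inference")
```

Averaging over many repeats smooths out scheduling jitter, which matters when comparing models whose per-image times differ by only milliseconds.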

 

Using the anomaly detection method of industrial image-processing partner company MVTec [1–6], existing images are taken from a Smart Camera. Between 30 and 100 images are likely to be needed to train the system, depending on the complexity of the failure case. No special hardware is needed, and the procedure is simulated offline using B&R's mapp Vision software.

 

However, when it comes to embedding AI in manufacturing processes, things can become challenging, if not daunting, since this step requires an understanding of business objectives and the integration of AI with existing processes and systems. On a local level, the primary challenge is getting a model to be stable and reproducible. This typically involves using unsupervised or semi-supervised learning techniques and robust statistical methods. Deep-learning models can also be computationally expensive.

 

06 Once powered by AI, control loops and the machines they manage may have sufficient information to run themselves.

Stepping into the future

 

The innovations described in this article offer control system processing times of between 60 and 200 ms, which will cover the majority of applications. However, some applications, notably in the food and beverage industry, need even faster processing. B&R is developing a technology that is expected to provide a control loop processing time of under 10 ms.

 

Another area being investigated is training-free anomaly event detection using a technique called Large Language Model Symbolic Pattern Discovery [6]. The aim is not only to detect a problem, but also to identify its root cause, even if that cause lies somewhere else in the process. In truth, users can already do this if the implementation is rules-based. The question is: will such systems be able to predict failures before they happen and fix them during planned maintenance to avoid unscheduled downtime?

 

Machine vision is steering us relentlessly towards the so-called dark factory »06, where autonomous machines can manage themselves, including responding to unforeseen issues. This is the strength of AI – when all the information can be fed into a control loop, machines may have all the information they need to run themselves! The adage of a machine being run by a man and a dog – with the dog there to prevent the man from interfering with the machine, and the man there just to feed the dog – may not be too far away!

References

 


[1] P. Bergmann et al. The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. International Journal of Computer Vision, vol. 129, pp. 1038–1059, January 6, 2021.

[2] Y. Jiang et al. A Machine Vision-based Realtime Anomaly Detection Method for Industrial Products Using Deep Learning. Chinese Automation Congress (CAC), November 22–24, 2019. Added to IEEE Xplore on February 13, 2020. DOI: 10.1109/CAC48633.2019.8997079 (paywall).

[3] F. Ding et al. PFEAL: A Novel Framework for Image Anomaly Detection Using Pre-Trained Feature Extraction and Activation Learning. IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1–12, June 13, 2025. Electronic ISSN: 2471-285X. DOI: 10.1109/TETCI.2025.3577485 (paywall).

[4] J. Liu et al. Deep Industrial Image Anomaly Detection: A Survey. Machine Intelligence Research, vol. 21, pp. 104–135, January 15, 2024.

[5] J. Kovar, B&R Community Lead. Innovation 2025: Anomaly Detection for Machine Vision. Available: https://community.br-automation.com/t/innovation-2025-anomaly-detection-for-machine-vision/5580

[6] Y. Zeng et al., Huazhong University of Science and Technology. Training-free Anomaly Event Detection via LLM-guided Symbolic Pattern Discovery. Available: https://arxiv.org/html/2502.05843v1#S5. Accessed July 18, 2025.
