25 Sep 2025

Embracing a safety-first approach that addresses the inherent complexities of AI systems

The European Union's new Machinery Regulation (EU) 2023/1230 has introduced a term that has sparked considerable debate among AI practitioners and legal experts: "self-evolving behaviour." This seemingly straightforward concept carries significant implications for how AI systems are classified, regulated, and deployed across the EU. However, the term's regulatory scope extends far beyond what many technologists might initially assume.

The Regulatory Grey Area

The Machinery Regulation does not provide an explicit technical definition of "self-evolving behaviour," leaving industry professionals to navigate a complex interpretive landscape. This ambiguity has led to a crucial question that affects virtually every AI system deployed in machinery applications:

Does the regulation only address AI systems that continue to learn after deployment, or does it also encompass the more common scenario of AI/ML systems that adapt their parameters during training before deployment?

The distinction is critical because the vast majority of deployed AI systems fall into the latter category. Most machine learning models undergo extensive training phases during which they continuously adapt their parameters based on training data, but once deployed, they operate as static systems with fixed parameters. Only a small subset of AI applications employ truly continuous learning mechanisms that update model parameters in real time during operation.
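The distinction can be made concrete with a minimal sketch, assuming scikit-learn and synthetic data: the parameters adapt only during the training phase, after which the deployed model simply predicts. The closing comment shows the online-update call a truly continuously learning variant would use instead:

    from sklearn.linear_model import SGDClassifier
    import numpy as np

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 4))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

    # Training phase: parameters adapt with every pass over the data.
    model = SGDClassifier(loss="log_loss", random_state=0)
    model.fit(X_train, y_train)

    # Deployment phase: parameters are now fixed; the model only predicts.
    X_live = rng.normal(size=(5, 4))
    print(model.predict(X_live))

    # A continuously learning system would instead keep updating in operation:
    # model.partial_fit(X_live_batch, y_live_batch, classes=[0, 1])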

The Capability-Based Interpretation

Research into the regulatory framework and expert interpretations suggests that the Machinery Regulation takes a capability-based approach rather than focusing solely on the timing of learning. This interpretation fundamentally shifts how we understand "self-evolving behaviour" in the regulatory context.

Under this framework, a system trained before deployment but designed with the inherent capability to adapt its behaviour based on that learning would still be considered to have "self-evolving behaviour." This means that even "static" ML models that have been adaptively trained but then fixed and deployed fall under this regulatory umbrella.

The regulatory focus appears to be on the system's inherent adaptability and behavioural flexibility, rather than on whether learning occurs continuously post-deployment. This interpretation recognizes that the potential risks and unpredictability associated with AI systems stem not just from ongoing learning, but from the fundamental adaptive nature of machine learning algorithms.

Implications for AI Safety Components

This broader interpretation has significant ramifications for AI/ML safety components across industries. Safety systems that incorporate machine learning components, even those built around static, pre-trained models, would generally fall under the "self-evolving behaviour" classification regardless of whether they continue learning after deployment.

Consider a safety monitoring system in manufacturing equipment that uses a pre-trained neural network to detect anomalies. Even though the model's parameters remain fixed during operation, the system's ability to recognize and respond to previously unseen patterns based on its training represents a form of evolved behaviour that distinguishes it from traditional deterministic safety systems.
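A hedged sketch of that scenario, using scikit-learn's IsolationForest as an illustrative stand-in for the pre-trained detector (the sensor features and data are assumptions, not a prescribed implementation): nothing is retrained in operation, yet the system flags patterns it was never explicitly programmed to recognize.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Offline training data: readings from normal operation (synthetic here).
    normal_sensor_data = rng.normal(loc=0.0, scale=1.0, size=(5000, 6))

    # The decision boundary "evolved" from data during training...
    detector = IsolationForest(random_state=42).fit(normal_sensor_data)

    # ...but is frozen in operation: no parameter changes, yet previously
    # unseen patterns are still flagged.
    live_reading = rng.normal(loc=4.0, scale=1.0, size=(1, 6))
    print(detector.predict(live_reading))  # -1 = anomaly, 1 = normal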

The Technical Reality vs. Regulatory Perception

From a technical standpoint, the distinction between systems that learn continuously and those that learned during training is significant. Continuously learning systems present unique challenges related to model drift, catastrophic forgetting, and the need for ongoing validation. However, the regulatory framework appears to recognize that even static AI systems exhibit behaviours that emerged from adaptive learning processes, making them fundamentally different from traditional programmed systems.
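One of those validation challenges can be sketched in a few lines: a simple input-drift check that compares live feature statistics against the training distribution. The statistic and threshold are illustrative assumptions, not a validated monitoring scheme.

    import numpy as np

    def drift_score(train_mean, train_std, live_batch):
        """Mean absolute z-score of a live batch against training statistics."""
        z = (live_batch.mean(axis=0) - train_mean) / train_std
        return float(np.abs(z).mean())

    rng = np.random.default_rng(7)
    X_train = rng.normal(size=(5000, 6))
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

    X_live = rng.normal(loc=0.8, size=(200, 6))  # the input distribution shifted
    if drift_score(mu, sigma, X_live) > 0.5:     # illustrative threshold
        print("Input drift detected: revalidate the model before trusting it.")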

This perspective acknowledges that the unpredictability and complexity that regulators seek to address are inherent characteristics of systems that have undergone machine learning processes, regardless of whether that learning continues post-deployment. The emergent behaviours and non-linear decision boundaries that characterize trained ML models represent a form of evolution that occurred during the training phase but manifests during operation.

Practical Considerations for Industry

For organizations deploying AI systems in machinery applications, this interpretation means that most machine learning-based components will likely require compliance with the enhanced safety requirements associated with "self-evolving behaviour." This includes systems that (one representative pattern is sketched after this list):

  • Use pre-trained neural networks for classification or prediction tasks
  • Employ ensemble methods that combine multiple learned models
  • Utilize transfer learning from foundation models
  • Implement reinforcement learning algorithms, even if "frozen" post-training
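As an illustration of the "trained, then frozen" pattern behind several of the items above, here is a minimal sketch assuming PyTorch and a torchvision backbone (the model choice and two-class head are illustrative assumptions, and the pretrained weights are downloaded on first run): a pre-trained network is reused via transfer learning, with all learned backbone parameters fixed at deployment.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    # Pre-trained backbone: its weights encode behaviour learned before deployment.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in backbone.parameters():
        param.requires_grad = False  # frozen: no further adaptation in operation

    # Transfer learning: only this small task-specific head would be trained.
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. normal vs anomalous

    backbone.eval()  # inference mode
    with torch.no_grad():
        scores = backbone(torch.randn(1, 3, 224, 224))  # behaviour fixed at run time
    print(scores)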

The regulatory approach appears to prioritize safety and risk management over technical distinctions about when learning occurs. This means that organizations should prepare for enhanced documentation, testing, and validation requirements for any AI system that exhibits learned behaviours, regardless of the timing of that learning.
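The regulation does not prescribe what that documentation looks like. As a minimal sketch, one plausible building block is a structured, auditable provenance record for each learned component; every field name and value below is a placeholder assumption, not a regulatory mandate.

    import hashlib
    import json
    from datetime import datetime, timezone

    # One auditable record per learned component; values are placeholders.
    record = {
        "component": "anomaly-detector",
        "model_version": "1.4.0",
        "training_data_hash": hashlib.sha256(b"training-set-v3").hexdigest(),
        "learning_post_deployment": False,  # frozen model, but still in scope
        "validation": {
            "held_out_accuracy": 0.97,  # placeholder result
            "validated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    print(json.dumps(record, indent=2))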

Looking Forward

The capability-based interpretation of "self-evolving behaviour" reflects a pragmatic regulatory approach that focuses on risk management rather than technical implementation details. While this may create compliance challenges for organizations using AI in machinery applications, it also provides a more consistent framework for addressing the fundamental characteristics that distinguish AI systems from traditional programmed systems.

As the regulatory landscape continues to evolve, organizations should prepare for a future where the sophisticated behaviours enabled by machine learning—whether learned continuously or during training—are subject to enhanced safety and validation requirements. The key is not whether a system continues to learn, but whether it exhibits the adaptive, emergent behaviours that characterize modern AI systems.

This interpretation ultimately recognizes that the regulatory challenges posed by AI systems stem from their fundamental nature as learned systems rather than from the specific timing of their learning processes. For practitioners in the field, this means embracing a safety-first approach that addresses the inherent complexities of AI systems, regardless of their deployment characteristics.

Pierrick Balaire

Global Business Director

Pierrick Balaire is a Global Business Director specializing in industrial machinery and energy with comprehensive expertise including ASTA certification (LV to HV type test) and strategic business development. He develops tailored plans for assigned industries, focusing on portfolio optimization and profitable growth while collaborating with key functional leaders on competitive positioning and new business initiatives.

You may be interested in...

Machinery Regulation (EU) 2023/1230

Quickly and efficiently demonstrate compliance with the Machinery Regulation (EU) 2023/1230 for the European market. Get the information and support you need so you can get your products to market faster.

Intertek AI²

Ensure the quality and safety of AI systems and devices with an end-to-end AI assurance programme.
