VDMA on EU AI consultation: Use existing laws instead of new AI regulation


AI in industry does not need further regulation at this point, but rather an objective handling of its risks.

The existing laws are in principle suitable for this purpose. This is the core message that VDMA has submitted to a public consultation of the EU Commission. At the same time, VDMA is presenting a comprehensive position paper on "AI in industry", in which the need for political action is analysed from the perspective of the mechanical engineering industry.

The EU consultation referred to the EU Commission's White Paper on Artificial Intelligence, which was intended to kick off a debate on a possible framework for AI in Europe. The debate focuses on two main questions: How can AI excellence in Europe be promoted in industrial policy terms? And how can trust in AI be increased through a legal framework that addresses its risks? The answers to these questions will form the basis for further industrial policy and legislative steps by the EU Commission.

Need for objectivity concerning the risks of AI

VDMA fully supports the initiative to shape AI at the European level and to prevent a patchwork of national legislation from emerging. At the same time, however, premature regulation must not create new barriers that hinder the widespread use of artificial intelligence and leave developers and users facing new uncertainties. VDMA therefore advocates an objective approach to the risks of AI-based solutions and urges caution with regard to new legislation.

For the EU Commission, AI characteristics such as opacity, unpredictability or autonomy are new risks that the legislator must address. Yet in industrial use, AI operates under human supervision and acts autonomously only to a limited extent. AI embedded in machines has very little to do with these risk-related characteristics and, for example, does not represent a new safety risk: the safety requirements for machines are already formulated in a technology-neutral way and also apply to machines with AI elements. In VDMA's view, a precise analysis of the actual autonomy and learning ability of machines is therefore a prerequisite for assessing the risk and the need for legislation.


Don't regulate technology, regulate the risks of application

In VDMA's view, a new, horizontal regulation of AI technologies is currently not justified. Instead, it should first be examined whether the existing regulation has gaps and needs to be improved. If new laws prove demonstrably necessary, they should not regulate technologies as such, but rather address the concrete effects of critical AI applications, such as discrimination. Otherwise, there is a risk that legislation will hinder innovation and that laws will have to be repeatedly amended as technology advances.

Currently, VDMA does not consider a readjustment of the product liability regulations to be necessary. Further analysis and observation are required before a proven liability regime is changed. A distinction should be made between products with embedded AI and pure AI software, as these areas differ fundamentally in terms of safety legislation.


On the basis of this position paper, VDMA will actively participate in the upcoming debate. Please find the current position paper attached below.
