Artificial intelligence: ISO 42001 safeguards the potential
The European AI Regulation for the trustworthy use of artificial intelligence (AI) has been in force since August 1, 2024. Companies must meet its requirements by August 2026. The new standard ISO 42001:2023, which is specifically designed for AI, already provides a comprehensive framework for the safe use of AI systems in the value chain.
Data is never static. But the fact that machines can autonomously recognize patterns in data and independently generate new content from them is a new dimension of scalability in data processing. Geoffrey Hinton, one of this year’s Nobel laureates in physics, compares machine learning to the Industrial Revolution. His pioneering work on artificial neural networks in the 1980s laid the foundation for today’s applications of generative artificial intelligence (AI) such as ChatGPT. But he also warns strongly of the risks: “We have no experience of what it’s like to have things smarter than us.”
According to the OECD report on artificial intelligence in Germany, about 12 percent of companies used at least one AI system in 2023, above the EU average of 8 percent. More than a third of large companies are already using AI technologies; among medium-sized and small companies, the figures are 16 and 10 percent respectively. While interest in AI products is growing, concerns still prevail in many places, especially with regard to risk assessment, data protection, social discrimination, governance and transparency.
European AI Act
In order to strengthen the AI landscape in Europe, the EU Regulation on artificial intelligence (AI Act) entered into force on August 1, 2024. Most of its requirements will apply from August 2, 2026, while those for highly critical applications take effect as early as 2025. The legal framework is the first of its kind in the world because it addresses the trustworthy use of AI systems before they are placed on the market. The focus is on compliance with fundamental rights, security aspects and ethical principles.
The regulation builds on a three-tier risk approach to prevent forgery, manipulation and social exclusion through AI. A distinction is made between low-risk, high-risk and highly critical systems. Highly critical systems are classified as unacceptable, and the corresponding applications are generally prohibited.
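To make the tiered logic easier to picture, the following minimal Python sketch models the three categories named above. The example use cases and their assignments are illustrative assumptions; the actual classification is a legal assessment under the Act, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers following the Act's risk-based approach."""
    MINIMAL = "minimal risk"        # e.g. spam filters, video games
    HIGH = "high risk"              # strict obligations before market entry
    UNACCEPTABLE = "unacceptable"   # corresponding applications are banned

# Hypothetical assignments of example use cases to tiers; the real
# classification is a legal assessment, not a lookup table.
EXAMPLE_USE_CASES = {
    "spam_filter": RiskTier.MINIMAL,
    "product_recommendation": RiskTier.MINIMAL,
    "credit_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def is_prohibited(use_case: str) -> bool:
    """True if the (hypothetically classified) use case falls into the banned tier."""
    return EXAMPLE_USE_CASES.get(use_case) == RiskTier.UNACCEPTABLE

print(is_prohibited("social_scoring"))  # True
```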
Minimal-risk applications include, for example, AI-based spam filters, product recommendations, speech or text conversion for translations, office organization tools and video games. Since users can in principle recognize or correct incorrect content themselves, the AI Act does not impose any special due diligence obligations beyond the usual rules of IT and information security. It does, however, encourage the voluntary adoption of codes of conduct in this context.
Users must be able to clearly recognize that they are interacting with AI systems. This applies to all synthetic audio, video, text and image content generated by AI across the various risk classes. Consequently, the AI Regulation imposes extensive transparency and labeling obligations, for example when chatbots used in customer service combine real data with machine-generated market information or forecasts.
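Purely as an illustration of what such a labeling obligation could look like in practice, the following sketch prepends a visible disclosure to a chatbot reply; the function name and wording are assumptions, not prescribed by the Regulation.

```python
def disclose_ai_content(generated_text: str, source: str = "AI assistant") -> str:
    """Prepend a visible disclosure so users can recognize machine-generated content.
    Hypothetical helper; the Regulation sets the obligation, not the wording."""
    return f"[Automatically generated by {source}] {generated_text}"

# Example: a customer-service chatbot returning a machine-generated forecast.
print(disclose_ai_content("Demand for this product is expected to rise next quarter."))
```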
Failure to comply may result in substantial fines for companies: up to 7 percent of global annual revenue for violations involving prohibited AI applications, up to 3 percent for violations of other obligations, and up to 1.5 percent for supplying false information.
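To put these percentages into perspective, here is a minimal back-of-the-envelope calculation using the caps listed above; the revenue figure is invented for illustration.

```python
# Maximum fine caps as percentages of global annual revenue, as listed above.
FINE_CAPS = {
    "prohibited_ai_application": 0.07,
    "other_obligation": 0.03,
    "false_information": 0.015,
}

def max_fine(global_annual_revenue_eur: float, violation: str) -> float:
    """Upper bound of the fine for a given violation category."""
    return global_annual_revenue_eur * FINE_CAPS[violation]

# Hypothetical company with EUR 2 billion in global annual revenue.
print(max_fine(2_000_000_000, "prohibited_ai_application"))  # 140000000.0
```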
Limits of the AI Act
Although the transparency requirements of the AI Act relate primarily to high-risk applications, the use of systems currently classified as lower-risk can also have critical consequences. AI applications inherently lack transparency because machine learning processes data associatively and retrieves information from hidden representations within artificial neural networks. Researchers point out that it is not technically possible to determine the “real” reason for a decision when generated content draws on different, dynamic contexts. Consequently, retrospective explanations and traceability also have their limits. The risk and quality framework must therefore start at an early stage: at the source of the data.
At the outset of a successful and safely deployed AI system, the question arises of how quality-assured training can keep data manageable across the value chain and how to intervene in the event of flawed results. This also concerns ethical aspects: if cultural prejudices already influence the selection of data, these biases will carry over into the training of the machine and into the algorithms built on it.
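As a minimal, assumed illustration of the kind of early data-quality check this implies, the following sketch measures how strongly a training sample is skewed toward one group; the attribute values are invented for the example.

```python
from collections import Counter

def smallest_group_share(values):
    """Share of the least represented group in a sample: a simple proxy
    for the kind of selection bias described above."""
    counts = Counter(values)
    return min(counts.values()) / len(values)

# Hypothetical training sample with an invented demographic attribute.
regions = ["EU"] * 95 + ["non-EU"] * 5
print(smallest_group_share(regions))  # 0.05: the smallest group makes up only 5%
```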
AI management system according to ISO 42001:2023
With the spread of AI, companies are increasingly caught between the sometimes critical risks to their business model and the many opportunities to increase efficiency and productivity. A proven solution is offered by management systems with actively controlled processes for continuous risk assessment and risk reduction through transparency, accuracy and clearly regulated responsibilities. With artificial intelligence in particular, applications are often tested on a bottom-up basis. Even if AI initiatives are initially used experimentally in individual corporate functions, the inherent AI risks can still diffuse into the entire organization. Consequently, centralizing quality processes (top-down principle) is important. This is where the ISO 4200x family of standards, published in December 2023, comes in. It offers a comprehensive testing framework that is specifically geared to the use of AI technologies.
In contrast to the AI Act, the standard does not only cover critical, particularly high-risk AI systems, but addresses any application at all levels of value creation. By establishing an AI management system (AIMS), companies can prevent individual teams from integrating experimental AI-supported pilot applications into workflows without assessing the quality and origin of the data used, and thus avert the resulting effects for the entire company.
The current version, ISO 42001:2023, is based on the high-level structure of proven management systems from other globally established standards such as ISO 27001 for information security. This means that the AIMS is also organized around the following sections: context of the organization, leadership, planning, support, operation, performance evaluation, improvement. The structure of the standard makes it possible to cover the entire lifecycle of an AI product: from data acquisition and model training, through supplier management, to decommissioning.
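Purely as an assumed illustration of how this high-level structure can be used to organize lifecycle activities, the following sketch maps the sections named above to example tasks; the mapping is not part of the standard's text.

```python
# Assumed mapping of the high-level structure sections named above to
# example lifecycle activities of an AI product (illustration only).
AIMS_STRUCTURE = {
    "context of the organization": ["inventory of AI use cases", "stakeholder expectations"],
    "leadership": ["AI policy", "assigned responsibilities"],
    "planning": ["risk assessment", "risk treatment objectives"],
    "support": ["competence and awareness", "documented information"],
    "operation": ["data acquisition and model training", "supplier management"],
    "performance evaluation": ["monitoring", "internal audit"],
    "improvement": ["corrective actions", "controlled decommissioning"],
}

for section, activities in AIMS_STRUCTURE.items():
    print(f"{section}: {', '.join(activities)}")
```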
Competitive advantages
In the ongoing digital transformation, companies certified according to ISO/IEC 42001:2023 can position themselves as responsible providers. Because the standard is not limited to specific risk classes but covers the entire organizational environment, even AI systems classified as uncritical remain under observation. The AIMS coordinates and evaluates all relevant ethical and legal aspects together with technical data analysis. The result is comprehensible and traceable documentation as a reliable basis for performance evaluation. This continuous improvement and quality assurance is essential for integrating AI safely into business processes in a way that increases productivity and minimizes potential operational and liability risks. In a rapidly developing AI landscape, these are important competitive advantages.