The risk-based classification system outlines AI compliance requirements for companies and includes a timeline, with the next key date set for 2 August 2025.
The AI Act is nothing new, but the next compliance milestone is fast approaching and should not be missed by European businesses. As the first comprehensive law to regulate the use of artificial intelligence, its overall aim is to ensure that AI systems used in the EU are safe, transparent and non-discriminatory. Although some companies and countries requested a postponement of the 2 August 2025 deadline, the EU Commission declined, reaffirming the importance of adhering to the established schedule.
What will be new from August 2025?
1. Notified authorities and notified bodies in high-risk AI systems (Chapter 3, Section 4)
The EU AI Act requires each member country to designate at least one authority responsible for overseeing and assessing AI conformity bodies.
Notified bodies play a crucial role in evaluating certain high-risk AI systems before they are allowed on the EU market or are put into use. Their responsibility is to verify that these systems meet all the relevant requirements of the AI Act, including risk management, transparency, and data quality.
2. New rules for GPAI models (Chapter 5)
The AI Act defines a general-purpose AI model as a powerful type of artificial intelligence that’s trained on huge amounts of data. These models can handle a wide range of different tasks – not just one specific job – and they’re built to be used in many different apps or systems. Large language models, for example, are considered general-purpose because they can be adapted to all sorts of uses. From 2 August 2025, providers of GPAI models will face new obligations, including:
- Preparing and maintaining detailed technical documentation
- Sharing information with downstream AI system developers
- Publishing summaries of the data used to train the model (where feasible and appropriate).
To support compliance, a Code of Practice was published on 10 July 2025. Member States and the Commission are currently assessing its adequacy. The Commission will also complement this new code with guidelines on key concepts related to general-purpose AI models. It is expected these will be published by the end of July 2025.
3. Governance (Chapter 7)
The EU is creating an “AI Office” to improve its knowledge and skills in AI. This office will be supported by EU member countries, who will help it carry out its duties as outlined in the regulations.
The European Union is also setting up a European Artificial Intelligence Board, comprising one representative from each member state, with the European Data Protection Supervisor and the AI Office participating as observers. The representatives will be responsible for coordinating AI regulation in their home countries, and the board will adopt its rules of procedure by a two-thirds majority.
Each EU member state must establish or designate at least one authority to oversee the implementation of the AI regulation. These authorities must operate independently and without bias. The authorities are also responsible for ensuring cybersecurity and confidentiality.
4. Confidentiality (Article 78)
The EU AI Act states that all parties involved in applying the regulation must respect the confidentiality of information and data they obtain. This includes protecting intellectual property rights and trade secrets. They can only request data necessary for assessing the risk of AI systems and must have cybersecurity measures in place to protect this information. The data must be deleted once it’s no longer needed, and information exchanged confidentially cannot be disclosed without prior consultation.
5. Penalties (Chapter 12)
Member countries must establish rules for penalties and enforcement measures for violations of the Act. These penalties must be effective, proportionate, and dissuasive, and consider the interests of small and medium-sized enterprises (SMEs) and startups.
- Non-compliance with prohibited AI practices can result in fines of up to 35 million EUR or 7% of a company’s total worldwide annual turnover, whichever is higher.
- Other violations can result in fines of up to 15 million EUR or 3% of annual turnover, whichever is higher.
- Providing incorrect or misleading information can result in fines of up to 7.5 million EUR or 1% of annual turnover, whichever is higher.
For SMEs and startups, the same caps apply, but the lower of the two amounts is used. The severity of the fine will depend on various factors, including the nature of the violation, the size of the company, and any previous violations. Member states must report annually to the Commission on the fines they have issued.
More technology may seem to mean more regulation, but it works in our favour: cybersecurity is a critical component of organisational risk management, with cyber insurance serving as an additional safeguard and final layer of protection.
This article provides a short summary of what comes next, but we strongly recommend that companies always check the regulations themselves.
