In this two-part series, we take a closer look at the EU's new cybersecurity legislation, summarising the necessary actions and emphasising the key points of each act.
Welcome to Part Two of our series on EU Cybersecurity Legislation. In the first part, we explored the Digital Operational Resilience Act (DORA) and the NIS2 Directive, focusing on their implications and the necessary steps for compliance. Now, we turn our attention to two other critical pieces of legislation: the Cyber Resilience Act (CRA) and the Artificial Intelligence Act (AI Act).
As digital threats continue to evolve, these new regulations aim to enhance the security and resilience of digital products and AI systems within the EU. The CRA sets out essential security requirements for products with digital elements, ensuring they are secure-by-default and capable of withstanding cyber threats. Meanwhile, the AI Act introduces a comprehensive framework to ensure AI systems are safe, transparent, and non-discriminatory, categorising them based on their risk levels and imposing stringent requirements on high-risk systems.
In this part, we will delve into the specifics of these acts, highlighting their key provisions, compliance requirements, and the potential penalties for non-compliance. By understanding these regulations, organisations can better prepare to navigate the complex landscape of cybersecurity and AI governance, ultimately safeguarding their operations and maintaining trust with their stakeholders.
CRA (Cyber Resilience Act)
The Cyber Resilience Act, which came into force on 10 December 2024, aims to increase the cybersecurity of hardware and software products with digital elements, such as mobile and IoT devices, network infrastructure products, Industry 4.0 applications, software products, smart home goods, and personal wearable devices. Most provisions become applicable 36 months after entry into force (11 December 2027), whilst the reporting requirements take effect after 21 months (11 September 2026).
This legislation gives manufacturers, importers, distributors, and users clearer guidance on cybersecurity requirements, particularly for products destined for global markets.
The CRA classifies products with digital elements (PDEs) either as critical products, which have significant cybersecurity-related functionality and pose a high risk if compromised, or as important products, which perform critical cybersecurity functions or have a high potential for adverse effects if compromised.
Essential security requirements
The CRA specifies essential security requirements for products with digital elements. Manufacturers, importers, and distributors must ensure these products are secure-by-default, include appropriate control mechanisms such as authentication and identity management to prevent unauthorised access, and come with security updates and technical documentation. The act also mandates data portability, allowing users to securely remove their data and transfer it to other products or systems. Importantly, actively exploited vulnerabilities and severe incidents must be reported within 24 hours.
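To make "secure-by-default" concrete, the sketch below shows one way a manufacturer might encode the principle in a device's factory configuration: risky services start disabled, credentials must be changed on first boot, and security updates are on by default. Every name and setting here is a hypothetical illustration, not a requirement lifted from the CRA's text.

```python
from dataclasses import dataclass

@dataclass
class FactoryConfig:
    """Hypothetical IoT device factory settings, written secure-by-default:
    every security-relevant option starts in its safest state."""
    remote_debug_enabled: bool = False      # debug interfaces off until explicitly enabled
    telnet_enabled: bool = False            # legacy plaintext services disabled
    require_password_change: bool = True    # force a unique credential on first boot
    auto_security_updates: bool = True      # security patches applied by default
    open_ports: tuple[int, ...] = ()        # no listening services out of the box

config = FactoryConfig()
assert not config.telnet_enabled and config.auto_security_updates
```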
However, manufacturers, importers, and distributors have distinct responsibilities under the Cyber Resilience Act:
- Manufacturers are accountable for the design, development, production, and marketing of products with digital elements bearing their name or trademark. They must report any actively exploited vulnerabilities and severe incidents within 24 hours.
- Importers must ensure products from outside the EU meet the CRA’s requirements before being placed on the EU market.
- Distributors are responsible for making these products available on the EU market without altering their properties, ensuring continued compliance with the CRA.

Together, this comprehensive approach aims to bolster cybersecurity and protect consumers and businesses alike.
CRA penalties for non-compliance
Similar to DORA and NIS2, the CRA carries penalties for those who do not comply. Fines can reach €15 million or 2.5% of total worldwide annual turnover, whichever is higher, and non-compliant products may be withdrawn from the market or recalled.
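To illustrate how a "whichever is higher" cap works in practice, here is a minimal sketch; the helper function is purely illustrative and not anything defined by the regulation.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the percentage of worldwide turnover."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# CRA headline cap: EUR 15 million or 2.5% of total worldwide turnover, whichever is higher.
print(max_fine(15_000_000, 0.025, 2_000_000_000))  # 50000000.0
```

For a company with €2 billion in worldwide turnover, 2.5% comes to €50 million, so the turnover-based figure rather than the €15 million floor sets the ceiling.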
AI Act (Artificial Intelligence Act)
The European Commission proposed the first regulatory framework for AI in April 2021, with the aim of ensuring that AI used in the EU is safe, transparent, and non-discriminatory. Since 2 February 2025, the Act's prohibitions on certain AI systems, along with its AI literacy requirements, have applied. The Act affects all providers and operators of AI systems in the EU, as well as those providing AI services into the EU. Financial penalties for non-compliance can reach €35 million or 7% of total worldwide annual turnover, whichever is higher, so it is essential to understand the requirements and implement them accordingly.
Risk levels and rules
AI systems are categorised into different risk levels under the AI Act, with each level carrying its own set of rules (a short illustrative sketch follows this list):
- Systems deemed to pose an unacceptable risk, such as those involving cognitive behavioural manipulation, social scoring, or certain forms of biometric identification and categorisation of individuals, are banned outright.
- High-risk AI systems, which can negatively impact safety or fundamental rights, must adhere to stringent requirements and obligations, including:
  - risk management throughout the AI system’s lifecycle;
  - data governance for training, validation, and testing datasets;
  - technical documentation;
  - instructions for use;
  - human oversight;
  - accuracy and cybersecurity;
  - a quality management system to ensure compliance.
  Individuals can lodge complaints about these systems with designated national authorities.
- Limited-risk AI systems must comply with transparency obligations to ensure end-users are aware they are interacting with AI.
- Minimal-risk AI systems, such as AI-enabled video games, remain unregulated.
- General-purpose AI (GPAI) model providers must draw up technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of GPAI models released under a free and open licence need only comply with the copyright obligations and publish the training-content summary, unless the model presents a systemic risk.
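To make the four tiers easier to keep straight, here is a minimal Python sketch of the categorisation. The enum and the example systems are illustrative assumptions for this post, not classifications taken from the Act's annexes; any real assessment is a case-by-case legal exercise.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "stringent requirements and obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

# Illustrative examples only; real classification depends on the Act's
# annexes and a case-by-case legal assessment.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI-enabled video game": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```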
Ensuring high standards in AI cybersecurity
The growing emphasis on technical documentation, human oversight, and stringent penalties for non-compliance underscores the critical importance of maintaining high standards in AI cybersecurity. By adhering to these new regulations, organisations can ensure they are not only compliant but also leaders in safeguarding their systems and data.
In conclusion
As we wrap up this series on recent cyber legislation, it is clear that the cybersecurity landscape is evolving rapidly. The new measures being put in place are designed to foster a more secure and transparent digital ecosystem. It is essential for businesses to stay informed and proactive in their approach to cybersecurity to mitigate potential risks and disruptions.
To find out more about any of the recent cyber legislation and its impact on your organisation, we invite you to speak to one of our experts. They can provide you with the insights and guidance needed to navigate these changes effectively and ensure your organisation is fully prepared.
