19/06/2025
By Mark Foster, International Policy Advisor
The EU is proud of its reputation as a global standard setter, exporting its regulatory rulebook internationally, a process often referred to as the ‘Brussels effect’. Coupled with this, one of the greatest benefits EU legislation brings to businesses and citizens is regulatory stability and clarity. The current debacle around the implementation of the AI Act risks undermining both achievements. Here are a few lessons the EU should learn from this process if it truly wants to provide global leadership in the crafting of trustworthy, ethical AI standards:
1) You can’t ‘stop the clock’ (at the last minute)
After several years of arduous political negotiations, the AI Act entered into force in 2024. Some elements – prohibitions such as social scoring or emotion recognition in workplaces – began to apply in early 2025. However, a significant part of the package – provisions on General Purpose AI (GPAI) and generative AI providers in particular – is due to apply as of 2 August 2025. Recently – allegedly following interventions from the US government and AI industry heavy-hitters – the Polish Presidency of the Council debated the idea of a ‘stop the clock’ process to delay application of the provisions due to apply as of 2 August 2025. This would require a legislative ‘quick fix’ proposal being fast-tracked through the co-legislators in less than two months (before the European Parliament summer recess). It looks unlikely this will materialise, primarily because the suggestion came much too close to the application date. Although the idea had merit, the mere suggestion of a delay so late in the process has caused confusion amongst industry stakeholders as they prepare to comply, even as the technical rulebook remains incomplete. Talk of delay at this late stage has undermined the EU’s credibility in defending its legislation and legislative processes, calling into question the lauded regulatory stability that is so crucial at a time of global tensions and political uncertainties. If the EU wanted to ‘stop the clock’, a political decision should have been taken much earlier in the process (end 2024 or early 2025) to communicate the decision and clarify the intention for market operators.
2) You must assume political control and responsibility for technical processes
Given the very sensitive nature of some of the issues the AI Act raises – particularly around transparency and copyright, trade secrets, and IP – it is understandable that the European Commission sought to depoliticise the process, appointing independent academics to lead the drafting of technical rules mandated under the AI Act and known as the ‘Code of Practice’ (or ‘CoP’). The European Commission (and the Chairs and Vice-Chairs of the CoP) should be commended for the seriousness with which they undertook this process, seeking to create an open and engaging framework for obtaining feedback from a broad range of stakeholders to flesh out the rulebook and help GPAI providers comply with the requirements of the legislation. The major obstacle was a lack of political cover for the Chairs during the drafting process. What was initially an independent, technical process became politicised, largely due to the Commission’s own U-turn away from running this as a technical exercise. Combined with an overly ambitious timetable embedded in the legislation – European Parliament and Council, take note of the importance of allowing sufficient time to complete technical rules that facilitate compliance, especially in fast-evolving areas like AI – this delegation of responsibility for politically significant aspects has ultimately proved counterproductive. Or to put it another way: whilst the technical can rarely be disconnected from the political, moving the goalposts and unnecessarily blurring the lines mid-process has made matters worse.
3) Be consistent in technical rulemaking, respecting the primary legislation
Throughout the CoP process, the AI Office and the Chairs of the CoP have been at pains to ‘balance’ the views of a diverse range of stakeholders, often with diametrically opposed positions. This has manifested itself most starkly in relation to copyright. Rightsholders’ representatives have screamed blue murder about copyright breaches through AI firms’ acquisition of training data, whilst GPAI model providers vehemently contest this, reiterating that their web crawlers respect the express rights reservation provisions and text and data mining exemptions included in EU copyright legislation. The bigger issue here, however, is how the Commission has cornered itself in a debate which is, at most, tangential to the AI Act itself. Whilst the level 1 text does mandate the CoP to develop technical rules on copyright- and transparency-related issues, the scope of these provisions falls entirely on AI firms, not on rightsholders. As such, the Commission erred in its strategy of balancing these two distinct positions. The Commission should take political responsibility and stick to the co-legislators’ original aim for the CoP – facilitating compliance with the AI Act, nothing more. Any calls for modification of the EU copyright directive, whilst perhaps legitimate and worthy of consideration, should be dealt with separately from the AI Act’s technical processes.
In conclusion, regulatory clarity, consistent political support for technical processes and strict adherence to primary legislation are essential elements of successful EU policymaking. Without these ingredients, the EU will struggle to maintain the ‘Brussels effect’ and export its values and principles. This is admittedly an immense challenge when it comes to regulating fast-moving, transformative technologies. It is also why a more flexible yet futureproof rulebook should be the ultimate goal – to compete in the global race and to harness the benefits (whilst mitigating the risks) of technological innovation.
Legislation and policy around innovation and technology is not just about technology for technology’s sake – putting rules on digital gatekeepers or nascent, innovative tech players. It’s about a fundamental rewiring of the EU’s entire economy and society in an ever-changing world. Technology will affect horizontal issues such as labour practices, R&D, skills and training, as well as all industrial sectors – from defence, through energy, to manufacturing, services and the creative industries. The EU will need to adapt its processes and frameworks accordingly, lest it fall behind jurisdictions that move more quickly. The AI Act process is a salutary example from which the European Commission must learn if the EU is to have any hope of competing internationally and defending its social market system.