BCG: The EU AI Act is a ‘wake up call’ for leaders to assess AI readiness. Here’s what HR needs to know.
After a six-month transition phase, the first provisions of the EU AI Act will be enforced across Europe. Attending an exclusive BCG roundtable, UNLEASH explores what HR leaders need to keep top of mind to stay compliant.
As of 2nd February 2025, the long-awaited EU AI Act comes into effect.
To find out what this will entail for EU businesses, UNLEASH attended an invite-only roundtable held by global consulting firm Boston Consulting Group (BCG).
Businesses that do not comply with the legislation face fines of between 4% and 7% of annual global turnover. Read what you need to know below.
In less than a week, the EU AI Act, which aims to impose a harmonized regulatory regime on AI systems within the EU, will become a binding legal requirement for businesses.
But according to BCG data, 72% of executives say their organizations are not prepared for AI regulation, as they face several key challenges: an evolving regulatory landscape, a lack of clear standards, equitable access to infrastructure, and the pace of innovation.
However, BCG’s AI risk mitigation framework aims to help companies understand and utilize Gen AI’s full potential, while also effectively managing risks.
To find out more about the impact these regulations will have, UNLEASH attended BCG’s exclusive roundtable.
What is the EU AI Act?
Opening the discussion, Kirsten Rulf, Partner and Associate Director at BCG, announced: “This is really a pivotal moment for AI and for AI development and regulation.”
She went on to explain that the Act itself is a consumer protection law that sorts AI use cases into four risk classifications: unacceptable risk, high risk, limited risk, and minimal risk, with specific provisions for Gen AI systems.
Businesses that do not comply with the legislation face fines of between 4% and 7% of annual global turnover.
“The penalties for breaching the AI Act were designed to be steep and could be applied to a wide range of businesses beyond just the tech sector using AI,” Rulf commented.
However, this important deadline is also a wake-up call for business leaders to assess their firm’s AI readiness. The time for experimentation is over.
“The EU AI Act demands that companies must define their risk appetite, understand existing use cases, and align AI implementation with a clear strategic vision to meet the Act’s requirements.”
Rulf went on to explain that the 2nd February deadline “marks the end of a period and the start of a new period in AI scaling.”
Businesses will therefore need to be “much more professional” about their AI initiatives, as one of the biggest changes will be the prohibition on providing or placing certain AI systems on the market.
Systems that deploy subliminal techniques or are used for social scoring will be particularly targeted, affecting, for example, banks and other financial institutions that use AI.
The Act also aims to promote trust and innovation while safeguarding against defined prohibited practices and high-risk AI systems, such as AI that influences behavior via subliminal or manipulative techniques that can cause significant harm, or AI ‘social scoring’ that causes unjust or disproportionate harm.
What does the future look like for European businesses?
Business leaders need to be aware that the deadline not only marks when the new law comes into play, but also when most employees within a business will need to be trained on AI to increase their AI literacy.
The Act will therefore push businesses away from use cases that are no longer allowed in Europe, and towards enabling the whole organization to navigate AI correctly.
Kirsten Rulf added: “From 2nd February, the rules on prohibited AI systems under the AI Act come into force. This date isn’t just about compliance with prohibited use cases – it’s a deadline for AI literacy too. Business leaders must ensure their workforce is AI-literate at a functional level and equipped with preliminary AI training to foster an AI-driven culture.”
“Initially, the enforcement of these provisions is limited to private enforcement, i.e., litigation. The powers of national or member state supervisory authorities to enforce these provisions will come later, from 2nd August 2025 – one year after the entry into force of the legislation.”
Beyond this, the next significant milestone for businesses will be at the end of April 2025, as this is when the final Code of Practice for General Purpose AI models is expected to be published by the European Commission.
Rulf therefore advised that businesses should use this time to gather sufficient information from AI model providers to deploy AI responsibly, while working collaboratively with providers, policymakers, and regulators to ensure pragmatic implementation.
By the close of 2025, businesses should implement standardization processes with measurable KPIs. To do so, they should be actively involving their workforce in understanding and shaping these processes – a demand of the EU AI Act.
Concluding, Rulf reiterated not only the importance of the new Act, but also the positive outcomes it will bring. “I think a lot of you will have heard the narrative that European regulation is stifling AI scaling and AI experimentation. That is not the narrative that we see in the market,” she said.
“What we see is that the EU AI Act and other such regulations bring the guardrails, quality, and risk and governance framework into place that it actually needs to scale.”