The EU AI Act will reshape AI innovation in Europe and beyond. Safety, transparency, and trust now dictate progress, with immense potential for those who stay ahead on these topics. Within the next 24 months, the use of closed-source AI - like OpenAI's ChatGPT - will be limited for high-risk applications, while we will see an immense rise of open-source models tailored to specific needs, and of software providing the required guardrails for AI.
About Maarten
Maarten is the founder of Deeploy (EUVC has backed Curiosity VC which has in its turn invested in Deeploy). As AI implementation accelerates, organizations tend to quickly lose control. Moreover, with the soon-enforced EU AI Act, organizations need to ensure their AI innovation is safe and transparent. Deeploy is the leading platform to ensure the safe, transparent, and compliant use of AI, providing organizations with high manageability and oversight over all AI use cases.
TL;DR
The EU AI Act, ratified on March 13, 2024, positions the EU as a leader in global AI regulation, focusing on the balance between economic growth and the technology's ethical implications.
It introduces a risk-based regulatory framework that mandates strict compliance for high-risk AI applications in critical sectors, aiming to ensure transparency, safety, and human oversight.
It also emphasizes transparency as a core principle, requiring AI systems to be traceable, explainable, and clear about interacting with AI, limiting the use of closed-source models like ChatGPT in high-risk areas.
The Act will pave the way for startups to innovate with open-source models, driving sector-specific applications and sustainable innovation within the European market.
On the 13th of March, 2024, the European Parliament voted in favor of the EU AI Act. AI has seen explosive growth - driving huge profits and quickly changing the way people work, live, and communicate with each other. AI start-ups grew rapidly to unicorn status, while whole national economies have begun to feel major shifts. Recent research by Martin H. Thelle (Implement.dk) showed that about two-thirds of the European workforce will see their jobs change drastically over the next 7 years, with almost 10% losing their jobs completely. However, AI could add as much as 8% to GDP, showing promise in counterbalancing any initial downsides.
On the other hand, AI poses enormous risks, especially in regulated industries like healthcare and banking. The rise of AI stokes fears about transparency, bias, and privacy, as well as fundamental discussions about the distribution of wealth and income. The EU AI Act is the product of societal pressure and discussions about fair use of AI and it will fundamentally change its trajectory.
The EU is setting global standards
The EU is not the first to come up with AI legislation. China has already introduced a patchwork of AI laws, and in October 2023, US President Joe Biden signed an executive order requiring AI developers to share data with the government. But the EU has now gone further.
"The EU AI Act is the world's first and only set of binding requirements to mitigate AI risks," said Enza Iannopollo, principal analyst at Forrester. She said it would make the EU the "de facto" global standard for trustworthy AI, leaving every other region to "play catch-up."
How does the AI Act work?
The AI Act follows a so-called risk-based approach that classifies AI products according to risk and adjusts scrutiny accordingly. The main idea of the law is to regulate AI based on its capacity to cause harm to society. The higher the risk, the stricter the rules. AI applications that pose a "clear risk to fundamental rights" will be banned completely.
AI systems considered "high-risk", such as those used in critical infrastructure, education, healthcare, law enforcement, border management, or elections, will have to comply with strict requirements revolving around risk assessments, transparency, record keeping, robustness, safety, and effective human oversight.
The AI Act follows a risk-based approach, with different requirements for different levels of risk
Transparency is key for high-risk AI systems
One of the main pillars of the EU AI regulations revolves around transparency. In the EU AI Act, transparency is defined as follows:
Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.
OpenAI. Anything but open
Large language models can do jaw-dropping things. But nobody knows exactly why. We can ask ChatGPT to generate an explanation for us, but that can hardly be seen as a true explanation of why the model produces a certain output. And that's a problem, because without understanding AI, we can't estimate or mitigate the risks, and are unable to steer AI whenever it makes mistakes. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step toward controlling more powerful future models.
Moreover, ChatGPT and some other Large Language Models are closed source, and hence only a handful of people have access to the underlying source code that could help explain how a model makes a prediction. OpenAI, the company behind ChatGPT, is - contrary to what the name suggests - fully closed source. This means it's not compliant with most of the provisions of the AI Act for high-risk use cases, and hence can't and won't be used for them.
Huge opportunities for responsible AI start-ups
This creates a huge opportunity for start-ups to build on top of open-source models or create tooling to explain and comply, providing end users the transparency they rightfully deserve under the EU AI Act. This means we'll see a rise of domain-specific AI start-ups - in healthcare, for example - that are open, transparent, and compliant for the given use case.
The Mozilla Foundation, one of the largest advocates for responsible AI, shared valuable research in early 2024 with their perspectives on responsible and trustworthy AI. Bottom line: prioritizing responsibility and transparency is the only way forward to survive in the European market, which remains one of the biggest worldwide, together with China and the US. While regulation often slows down innovation in the short term, it tends to lead to more sustainable innovation in the long term, with a better balance between economic upside and risk mitigation for those who don't profit directly from it.
👋 Upcoming in-person events we’re hosting
There’s nothing we like better than getting Europe’s best and brightest together with good food, drinks, and conversations that go truly deep.
Fund Modelling Workshop & Mixer | 5th of June | 🌍 Berlin, Germany | Join waitlist.
European VC Awards | 4th of June | 🌍 Berlin, Germany | Get tickets.
📅 Upcoming virtual events
From time to time, a podcast is just not enough. Check out our roundtables and live events below.
State of European VC Fundraising | April 10th, 2-3:30 PM | Register here.
Speakers: Ekaterina Almasque, Founding GP at OpenOcean | Joe Schorge, Founding GP at Isomer Capital | Daniel Keiper-Knorr, Founding GP at Speedinvest.
🏆 Firesides with the winners of the European VC Awards
Fireside with the Newcomer of the Year Winner | 13/6, 12-1:30 PM | Register here. Hundreds of new VC funds come to market every year. But only ONE will win Newcomer of The Year. This is your chance to meet the winner firsthand.
Fireside chat with the Winner of the Hall of Fame | 25/6, 12-1 PM | Register here.
Hear firsthand from a true giant upon whose shoulders the European tech ecosystem stands tall.
🗓️ The VC Conferences You Can’t Miss
There are some events that just have to be on the calendar. Here’s our list, hit us up if you’re going, we’d love to meet!
0100 Conference Europe | 📆 16 - 18 April | 🌍 Amsterdam, Netherlands
TechChill Riga 2024 | 📆 18 - 19 April | 🌍 Riga, Latvia
Iceland Innovation Week | 📆 15 - 16 May | 🌍 Reykjavik, Iceland
EBAN Congress Tallinn 2024 | 📆 20-22 May | 🌍 Tallinn, Estonia
SuperVenture | 📆 4 - 6 June | 🌍 Berlin, Germany
Nordic LP Forum & TechBBQ | 📆 September | 🌍 Copenhagen, Denmark
North Star & GITEX Global | 📆 14 - 18 Oct | 🌍 Dubai, UAE
GITEX Europe 2025 | 📆 23 - 25 May 2025 | 🌍 Berlin, Germany
Trying to get in front of European VCs and LPs? The 2024 EUVC Media Kit is out - check it out here, and let’s talk. 💌