
Regulating AI

The EU aims to introduce a common regulatory framework for AI. Is this bad news for startups?

13 min read
🦉 Before Growth is a weekly newsletter about startups and their builders before product–market fit, by 3x founder and programmer Kamil Nicieja.

This week, I had a conversation with my friend Maciej Mańturz about the upcoming European Union legislation aimed at regulating AI. We discussed how the new law will impact startups looking to incorporate AI, and what entrepreneurs need to know to stay ahead.

Maciej is a lawyer and a specialist in privacy. The common branches of law never truly resonated with him, and he never envisioned himself in a courtroom. Eventually, he joined a major corporation, which opened his eyes to the intersection of technology and business. He came to see a lawyer not as an obstacle but as a facilitator of business initiatives.

Since then, he’s pursued further education and earned certifications in privacy and broader tech law, covering areas like contracts, intellectual property, a touch of cybersecurity, and even AI, culminating in a postgraduate thesis on the EU’s proposed AI framework, which I read while preparing for this discussion.

Kamil Nicieja: We studied together, so you know I left law school behind. I’ve often reflected on what initially attracted me to law. Once I shifted from law to coding and then to business—which isn’t that different from law—it became clear to me: lawyers are like coders, but they deal with incredibly complex “syntax” and “run” their code in a slow and unpredictable system: the courts.

Now, with the emergence of advanced language models, coding feels more and more like drafting laws. You type in a command, and really, it’s anyone’s guess what the outcome will be. Do you see the parallels?

Maciej Mańturz: It’s interesting that you draw those connections; the parallels are becoming apparent and commonly acknowledged. This year, I attended a series of talks by lawyers well versed in the intersection of tech and law, covering fields like intellectual property, cybersecurity, and AI. One specialist had launched a postgraduate course that teaches tech-centric law alongside coding fundamentals: how an app works, some programming, and the software development lifecycle. It’s said to draw inspiration from Western European models and others worldwide.

I read an article proposing that lawyers, math-heavy tasks aside, are natural fits for coding classes too. The logical nature of both professions likely supports this view. It’s also becoming evident that modern lawyers should stay on top of tech trends, which highlights the value of cross-disciplinary expertise. I only wish such a mindset had been mainstream when we started our studies.

We’re meeting at a significant time. It’s been five years since the EU introduced GDPR. Now, they’re working on another major tech regulation: the AI Act, or AIA. For those not closely following European legislation, could you explain the main goals of this new bill? Do you know when it might be implemented?

The final EU Artificial Intelligence Act is expected to be adopted near the end of 2023. It’s clear that AI is no longer just a theoretical idea. Nowadays, you can’t browse social media without coming across news about AI, whether it’s a new advancement or a tool set to change our lives.

There are genuine concerns about the potential negative effects of this technology on the average person. Deepfakes are a prime example of potential misuse. Training AI models, especially using deep learning methods, requires vast amounts of data. Personal information can be as valuable as gold, which is why many social media platforms can seem invasive to our privacy.

There’s a tug-of-war between laws protecting our privacy and businesses looking to profit while offering us these innovative services. It’s a trade-off with our rights: regulation that’s too strict might drive businesses away, while regulation that’s too lax could jeopardize our rights and security.

Just like with GDPR, this new regulation isn’t just for companies based in the EU—if you’ve got users in Europe, you’ve got to comply. It’s pretty clear the target is largely on U.S. and Chinese tech giants. I’ve come across some in tech circles saying the EU is essentially in a cold war with other major powers, using regulation as their weapon of choice because they can’t go toe-to-toe on tech innovation. Can you break down why, when the EU rolls out a big tech regulation, it becomes a global must-follow? What’s stopping U.S. and Chinese companies from just giving it the cold shoulder?

GDPR surprisingly set a global precedent but, you know, it kinda worked, right? Many new privacy regulations are modeled after it, even if it’s a challenge for businesses. In the realm of data privacy, there’s ongoing tension about transferring personal data to the US. In essence, every few years, NGOs, led by Max Schrems, prompt the ECJ to declare the US framework incompatible with GDPR. Then, governments negotiate until the EU bodies approve a decision. This back-and-forth often revolves around US laws related to accessing data for national security reasons.

The same is seen with companies like Meta, who continue to operate despite GDPR-related fines. Perhaps the EU’s global influence and potential profits are compelling reasons to keep pushing boundaries. It seems the EU is willing to compromise on GDPR to ensure data transfers, which is also a strategic decision in the global tech race. From a company’s standpoint, it’s simpler to comply with these standards if it aims to operate within the EU. And the EU is a significant market, so nobody can just drop it. But then again, I’m no business guru.

The bill sorts AI systems into two major camps: those that pose a “high risk” and everything else. So what does the EU mean by significant risk when it comes to AI? Does this mean you can basically snooze on this bill if you're developing just another text-summarization app, but you’d better pay attention if your AI could potentially harm people or be used to discriminate against them on a large scale? Like in HR or finance?

The AI Act adopts a risk-based approach, considering various points in the product chain, from creators to deployers. While it’s up to you to categorize your product’s risk level, disagreements with regulators could result in hefty penalties. Given these potential consequences, as a lawyer I would advise not to dismiss the AIA’s requirements.

Even products deemed lower risk must adhere to the AI Act’s foundational principles, like explainability, privacy, and security. As you mentioned, there are certain activities classified as “high risk” or outright prohibited in the Act. These should be immediate red flags for any company. Businesses should consult legal experts to ensure they aren’t inadvertently falling into these high-risk categories.

Let’s talk about some tangible scenarios. Suppose I aim to create a foundation model on par with GPT-4. For instance, consider Europe’s prominent AI enterprise, Mistral, and its new LLM. How does the AIA assess the risk associated with such a venture?

When evaluating the requirements of such a system, a layered approach is essential. The initial step involves determining its placement within the risk classification spectrum. The “high risk” category relates to systems that could significantly jeopardize a person’s health, safety, or fundamental rights.

Perhaps that’s not a bad classification, given that researchers found Mistral’s model can provide information on topics like bomb construction, suggesting a potential shortfall in its safety measures.

Yup. And similar to GDPR, the penalties under the AIA are significant. I think they can reach up to 40 million euros or 7% of global annual revenue. This makes the cost of non-compliance even steeper than in the realm of personal data.

OK, next example: a B2B product that uses AI to monitor daily activities of employees across platforms like Slack, Outlook, Teams, Jira, and compiles a daily company-wide summary. Would this be classified as high-risk? What are the reasons for or against this classification?

Certainly, this situation could be viewed as high risk since employment is explicitly labeled as such in the AIA annexes. Surveillance also poses additional challenges from a GDPR standpoint.

Wow, color me surprised. I personally didn’t think this would be a huge deal. Let’s consider one final scenario: a chat application where 500 million users can converse with virtual representations of celebrities like, say, LeBron James about basketball. Would this be deemed high-risk or low-risk?

I’d argue that this sounds like a low-risk situation. Generally, chatbots don’t fall under the high-risk category. The intent behind these regulations is to prevent misuse or exploitation in areas of public interest, such as welfare, employment, and safety.

However, it’s important to note that creating a virtual likeness of someone must respect intellectual property rights. It’s also now widely understood that users should be informed when they’re interacting with a machine, not a human; the AIA would actually qualify such a virtual likeness as a deepfake. Additionally, there will be another EU legal act addressing civil liability for damages caused by AI, which should also be kept in mind.

What guidance would you offer to a standard AI startup in Europe? Given that such companies are often small, their financial landscape can be challenging. They might have secured some funding, but it’s equally likely they haven’t. On top of engineering expenses, they’re also faced with the potential costs of legal counsel. Is it wiser for them to address legal matters upfront and, if so, how can they do so affordably? Or should they prioritize gaining traction, securing external investment, and then allocating funds for legal guidance?

As someone specialized in privacy, I’d stress that any solution aiming for a European launch should adhere to the privacy-by-design principle, especially if it involves personal data. The financial repercussions for not complying with EU regulations are steep and can be quite daunting.

Currently, it seems plausible that only major entities could significantly impact the AI sector due to these regulatory hurdles. It’s uncertain whether they’d even choose the EU as a base for AI development given these challenges. If startups are to thrive in this environment—and I hope they can—it’s wise to seek at least basic legal counsel early on. While a full legal team might not be necessary initially, gaining a foundational understanding of expected requirements is crucial. A good starting point might be recommendations from regulatory bodies like the UK’s ICO.

Gotcha. I personally believe VC investors can play a significant role here by providing startups in their portfolio with complimentary legal consultations as a value-add. It’s far more efficient for a VC fund to employ lawyers who can assist several startups simultaneously rather than each startup seeking individual legal counsel. Some VCs did that with GDPR.

Oh, and while we’re at it… if I’m already compliant with GDPR, does that mean I’m in the clear with the AIA as well?

No, not really.

