25 August 2023
The great paradigm shift: How European businesses can get AI right

One of the biggest questions CEOs are asking this year is how they can use AI in their business. The technology has the potential to transform a company’s operations, from improving the customer experience to automating backend tasks. But it also comes with risks, not least around bias and the responsible use of data. In a panel discussion hosted by Olivier Martret, Partner at Serena, four sector experts explored the role AI is playing in today’s world.

The panel brought together four experts who are working with AI in different ways to discuss the challenges and opportunities of the technology:

  • Marina Bojarski is an intellectual property lawyer at professional services firm Eviden. She’s also writing a thesis on AI and the law, particularly how intellectual property can protect AI.
  • Agata C. Hidalgo is European affairs manager at France Digitale, Europe’s biggest trade association for startups and investors, where she’s responsible for representing its members’ views to decision makers in the European Parliament.
  • Edouard d’Archimbaud is co-founder and CTO of Kili Technology, a data labeling startup that helps some of France’s biggest companies prepare the data they use to train AI models.
  • Charles Gorintin is co-founder and CTO of health insurtech startup Alan, which works with more than 20,000 companies in France, Spain, and Belgium. He’s also a non-operational co-founder of Mistral AI, a European competitor to OpenAI.

Let’s start with the opportunities. How can AI benefit businesses and society?

[Charles] We’re already saving millions of euros at Alan thanks to AI. For example, when someone sees an osteopath, we have to transcribe the information in their invoice in order to reimburse them. We used to do this manually, but with generative AI and GPT-like models it happens instantly and much more cheaply because we don’t have to hire anyone to do the transcription. 
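In practice, that kind of transcription can be a single call to a general-purpose model. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name and field names are illustrative, not a description of Alan’s actual pipeline.

```python
# Minimal sketch of LLM-based invoice transcription, in the spirit of the
# example above. Assumptions (not from the discussion): the OpenAI Python SDK,
# the gpt-4o-mini model, and the field names below are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_invoice(invoice_text: str) -> dict:
    """Extract the fields needed for reimbursement from raw invoice text."""
    prompt = (
        "Extract the following fields from this osteopath invoice and reply "
        "with JSON only: practitioner_name, date, amount_eur, patient_name.\n\n"
        f"Invoice:\n{invoice_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # ask the API for strict JSON
    )
    return json.loads(response.choices[0].message.content)

print(transcribe_invoice("Séance d'ostéopathie, 45 €, Dr Martin, 12/06/2023, Mme Dupont"))
```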

[Agata] I’m excited for AI to go even further and tackle complex tasks. Think about lawyers who need to draft standardized contracts. With AI, they’ll be able to generate something that needs minimal human edits. Another area is data analysis. There are startups like Metroscope that help energy companies track and optimize their performance. AI will allow them to generate hyper-readable advice automatically without anyone having to spend hours trawling through huge databases. The third area I’m interested in is predictions. For example, in the health world Owkin is using AI to discover new drugs by digging into data from clinical studies.

[Edouard] Yes, in the medical field, AI is also used to automate tasks that are difficult for humans. For example, analyzing microscope images and outlining cells to the nearest pixel is extremely slow and painstaking work for a person. With AI, you can just click on the cell to mark it.

In terms of our business, AI is at the heart of how we work at Kili. Our developers use GitHub Copilot, which offers autocomplete-style suggestions as they code. We also generate a lot of our marketing content with AI.

I think AI is going to have the same impact as deep learning had in the machine learning world in 2012. In 2010, the error rate for computer vision models – software trained to detect objects in images – was 30%. In 2015, it was close to 3%. So the impact has been huge and we think generative AI is going to have the same impact in a lot of areas.

How do you ease the concerns of those who fear AI and think it will steal their job?

[Marina] We’ve run training sessions at Eviden to show what generative AI is, what it can do, and the risks it poses. A big concern we see is confidentiality. Because all the data centers are in the US, anything you put in a prompt can potentially be seen by others. And you also have to think about GDPR given the data is being transferred abroad. So we’re trying to tell our people that generative AI is an incredible tool, but only if you know how to use it.
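One practical precaution along these lines is to check prompts for obvious personal data before they leave the company. A rough sketch, not anything Eviden describes; the patterns are placeholders, and real GDPR compliance needs far more than regexes:

```python
# Rough sketch of a guardrail that flags prompts containing obvious personal
# data before they are sent to an external LLM provider. Placeholder patterns
# only; real GDPR compliance requires far more than a few regexes.
import re

PERSONAL_DATA_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "french_phone": re.compile(r"(?:\+33|0)[1-9](?:[ .-]?\d{2}){4}\b"),
    "long_numeric_id": re.compile(r"\b\d{13,15}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any personal-data patterns found in the prompt."""
    return [name for name, pattern in PERSONAL_DATA_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize the claim filed by jean.dupont@example.com, tel 06 12 34 56 78."
findings = check_prompt(prompt)
if findings:
    print(f"Prompt blocked, personal data detected: {findings}")
else:
    print("Prompt is safe to send.")
```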

[Charles] It really is extremely important to train everyone. As well as internal teams, we should also be educating the general population about the benefits of these tools as well as the mistakes that can be made with them.

We’re at a technological turning point, but not an epistemological one. What I mean by that is we can think about this moment in the same way we thought about previous breakthroughs. For example, when electricity was discovered, we successfully managed to educate everyone not to put their fingers in a socket.

We have to make sure that people try AI tools. At Alan, we built a course called GPT Academy, where people can learn how to use them. We encourage everyone to try AI tools, especially those who are skeptical. I want there to be skeptics, but I want them to know what they’re talking about.

What are the current limitations of AI?

[Edouard] There are challenges at every stage of training a generative model. The first is cleaning the data, such as removing any violent or pornographic content. That’s extremely expensive and currently it’s done in quite a crude, programmatic way. Then you have to remove personal data, which isn’t easy either. The third step is removing bias to make sure the dataset is balanced, so that if you ask the model to suggest jobs for a woman, for example, it doesn’t just reply “housewife” or “nurse”.
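The “crude, programmatic” filtering Edouard mentions is often little more than keyword blocklists and pattern matching. A toy version of the first two steps, with placeholder terms and patterns rather than anything from Kili’s pipeline, might look like this:

```python
# Toy version of the first two cleaning steps described above: drop documents
# that hit a keyword blocklist, then scrub the personal data a simple pattern
# can catch. Real pipelines use classifiers and far larger rule sets.
import re

BLOCKLIST = {"gore", "explicit"}  # placeholder terms, not a real blocklist
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def keep_document(text: str) -> bool:
    """Step 1: crude content filter based on a keyword blocklist."""
    return set(text.lower().split()).isdisjoint(BLOCKLIST)

def scrub_personal_data(text: str) -> str:
    """Step 2: mask the personal data that a simple pattern can catch."""
    return EMAIL.sub("[EMAIL]", text)

corpus = [
    "Contact the author at jane.doe@example.org for the dataset.",
    "Some explicit content that the blocklist should reject.",
]
cleaned = [scrub_personal_data(doc) for doc in corpus if keep_document(doc)]
print(cleaned)  # the second document is dropped; the e-mail address is masked
```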

[Charles] Training data is the web. It’s a reflection of humanity with all its flaws. Before we filter out the things we don’t like, we have to define “we” and “don’t like”. Even if we manage that, the cost of filtering everything on the internet would be enormous. Bias is also a question of philosophy and ethics. Which jobs would we want AI to suggest for a woman? I don’t even know if we can answer that as a society, so trying to program machines to do it is difficult.

[Marina] There are also big gray areas around IP. For example, to generate an image of a woman at the beach, Stable Diffusion has to have been trained on lots of similar images collected from the internet. Those images could be protected by copyright or even by a registered trademark.

We don’t yet know if training an AI model counts as making a reproduction for copyright purposes because there haven’t been any legal decisions on it yet. But there are several ongoing lawsuits in the US against Stable Diffusion and GitHub Copilot, because code can also be protected by copyright.

[Agata] We asked our members how they thought copyright applied to AI and it triggered a massive debate. One point they made was that if you have to pay a fee or ask permission before using each piece of training data, the whole process will be very slow and expensive.

Our view is that we shouldn’t regulate something that we don’t yet understand. Let’s wait and see how it goes. It’s too early to put a strict framework around this.

On the topic of regulation, the EU is working on its AI Act. What’s the state of play there?

[Agata] European decision makers want a draft text finalized by December. Next year we have EU elections, and after that it’ll take around two years for the rules to come into force, so one of our fears is whether the text will still be relevant by then. That’s why we think it’s better to pass it quickly.

In terms of the content, the Act aims to place different obligations depending on the risks that AI poses. For example, deep fakes and creative uses are low risk, while AI that manages the operation of a nuclear power plant would be high risk.

There’s a mechanism in the Act called a regulatory sandbox, which would allow startups to keep growing without meeting all the obligations in the text, at least until they go to market. We’d like to see that make it into the final version.

Of course, there’s room for improvement – we’re campaigning to prevent regulation from going too far – but the good news is that, thanks to the work we’ve done with France IA and networks in other European countries, startups are well represented in the text and are front of mind for decision makers.