
Catelijne Muller on AI and the Ethical Framework


"Responsible AI must be put into practice"

She is by now a familiar face in Brussels. Catelijne Muller writes international advisory reports on the use of Artificial Intelligence (AI). It is therefore not surprising that the Dutch Association of Insurers involved her in the design of the Ethical Framework that applies from 2021. "That Dutch framework is seen as a success story in Europe."

The interview has to take place by telephone. Muller is busy; it takes careful juggling to fit in all her appointments. "AI remains a hot topic," she says with a laugh.
In the autumn of 2019 we spoke for the first time. Muller, as a member of the EU High-Level Expert Group on AI, was then involved in setting up the Ethical Framework for Data-Driven Decision-Making that the Association introduced in the summer of 2020. The framework consists of open standards, which means that the members of the Association can interpret them in their own practice, but they must take into account the seven requirements drawn up by the Expert Group: 1. Human agency and oversight, 2. Technical robustness and safety, 3. Privacy and data governance, 4. Transparency, 5. Diversity, non-discrimination and fairness, 6. Societal and environmental well-being, and 7. Accountability.
In addition to the Ethical Framework, Muller also discusses the European Commission's proposal for a European AI law (see box) and responds to the slogan 'America innovates, China copies and Europe regulates'. "America is following us of its own accord. Some states already go further than we do in certain respects with their regulations."

European AI legislation is coming

Europe is working on an AI regulation. All companies and governments that use AI affecting European citizens will be covered by that regulation, but not before the beginning of 2025.
The European Commission presented its proposal on 21 April 2021. The European Parliament and the Council (made up of representatives of the national governments) then set to work examining that proposal. Both institutions are now working on an amended version. Once they have defined their positions, they will negotiate with each other, in the presence of the European Commission, on a final compromise text. This text will eventually be published in the Official Journal of the EU. The new AI legislation is expected to come into force two years later.

In the earlier interview, you emphasized that humans must be in command when it comes to AI. Do you still think so?

"Yes, it remains of the utmost importance that humans are in command. Especially because what AI can do is sometimes a bit disappointing. There's a lot of hype around it. It can do a lot, but AI also makes very strange mistakes that humans would never make."

Is AI overrated?

"Whether overrated is the right word, I doubt, but there is insufficient understanding of how it works. As a result, expectations are often wrong. Looking for patterns in data is very different from understanding what the data means. I sometimes give the example of a photo of a cat. AI does not see a photo, but pixels. The system has learned that all those pixels together are a cat, but it does not know that a cat is an animal that can walk, eat, sleep, and so on."

So what is AI? A system?

"I can't answer that. No one agrees on the definition. Now that Europe wants to regulate, there will have to be a definition, but that discussion has not yet been settled. AI is a collection of technologies, with all kinds of manifestations and with complicated terms that keep changing. In any case, AI is not one thing, but comprises multiple systems that influence our environment with a certain autonomy. It is particularly important to look at the impact of those systems."

What does AI mean to you?

"A lot, by now. It piqued my interest a few years ago and I started delving into it. Then I wrote an advisory report at European level that really took off. Ultimately, like many others, I want AI to be steered in the right direction. If we don't manage its impact properly, the good sides of AI won't stand a chance either. AI is therefore mainly a matter of good risk management."

"AI is mainly a matter of good risk management"

Besides opportunities, AI also brings its share of risks. What are the biggest ones?

"There are quite a few. I usually group them into four types. First, the technical risks: the risk that AI does not work as it should. Then there are legal risks. People often think of privacy, but of course there are many more. In the field of labour law, for example, there is the question of whether an algorithm can fire someone. The third category concerns ethical risks. We have been talking about these in Europe for years, and there are now guidelines for them. What do we want with a technology that takes over many cognitive tasks? How far can that go? The last category is the social risks. If systems do not work properly, this can lead to discrimination and ultimately to social exclusion. Think of opportunities on the labour market that are reduced by AI, or different premiums for people from different backgrounds."

The mission of the EU High Level Expert Group on AI has been fulfilled and your mandate is over. What is your role now?

"At the moment I am president of ALLAI, an organisation that I founded with three fellow experts from that EU group. We realized straight away that the work would not be finished after two years of writing advisory reports, and that the discussion had to continue. Our position is that AI should be technically, ethically, legally and societally responsible. That is why we talk a lot with European policymakers, do the necessary educational work, and try to raise awareness. In addition, I am still AI rapporteur for the European Economic and Social Committee. I always call this advisory body the European SER, after the Dutch Social and Economic Council."

"It is important that we define in the EU what we do and do not want"

Europe wants to be the first to come up with strict rules for AI. In an interview in de Volkskrant, you said that it will be "a party for lawyers"?

"Let me start by saying that it is good that the European Commission is proposing legislation. Europe has opted for explicit protection of the fundamental rights of citizens. As a basis, I think that's very good. It is important that we define in the EU what we do and do not want, but all in all it has become an enormously complicated document. A lot of legal knowledge is needed to understand the proposals. It is not high-level legislation; in fact, it goes into considerable detail, and that makes it very complicated."

The proposals range from 'minimal' and 'limited' to 'high' or even 'unacceptable' risk. An example of minimal risk is Spotify; unacceptable risks include cameras with facial recognition. Insurance is classed as high risk in the proposal. Rightly so?

"I know that proposals are now being made to classify health and life insurance as high risk, and I understand that insurers do not like that, but I think the ban on social scoring may be more crucial. Social scoring tends to evoke Chinese practices, where the entire population receives a score for its behaviour, but the Commission tries to draw a line here between what is an acceptable assessment, for example for setting premiums or predicting insurance fraud, and what is unacceptable 'scoring' of someone's reliability. Because let's be honest, we all do quite a lot of scoring. Insurers too. They have to assess risks and attach a premium to them. What are the chances that someone will suffer damage, or cause damage? And if you have to determine the premium level, what do you weigh and what not? Can the driving behaviour of a spirited driver be taken into account if it is monitored and assessed by AI? And if someone says they don't smoke, can you consult their social media and search history to see whether there is a risk that they do smoke? The Commission tries to draw such boundaries with social scoring, and that is what I would focus on as an insurer."

But is it right that insurance should be seen as high risk?

"At the moment, only AI-driven credit scoring for essential private services is high risk. And certain insurance policies can indeed be seen as an essential private service. Life and health insurance are now proposed as an explicit addition to the high-risk list, but for the time being that is still a proposal. And even if the entire industry ends up on the high-risk list, that does not mean insurers would not be allowed to use AI at all. They merely have to meet certain requirements, and frankly, I don't think that's so strange. Many insurance policies are essential for people, and AI can have discriminatory, unfair, or simply wrong outcomes. Especially after the Dutch childcare benefits scandal (the toeslagenaffaire), I can't imagine that the insurance industry wants to exclude people unjustly."

On the other hand, you could also say that insurers already have enough on their plate with the GDPR and Solvency II. Why not cast the rules as open standards instead?

"With regulation, you also create a level playing field. Your competitor must comply with the same rules as you and therefore cannot cut corners. The entire sector benefits from this. Moreover, I think that many insurers have long met these requirements for high-risk AI."

"With regulations, you also create a level playing field. The whole sector benefits from this."

Jokingly, it is sometimes said that 'America innovates, China copies and Europe regulates'. Does that also apply now?

"Not really, because although Europe is taking the first step, America will soon follow suit. Make no mistake: there are good contacts between European and American policymakers. There are even states in America that in some respects already go further than what Europe is now proposing. The well-known Brussels effect (everything that comes from Brussels also finds its way into the rest of the world) is already underway."

America will follow soon?

"Yes, there is a chance that the same kind of AI regulation will soon be discussed in America. And don't forget that Europe regulates AI for the whole world. If a company wants to enter the European market, it must comply with our rules. Even if it's a Chinese or American company."

The question remains whether Europe is not being too virtuous. Are we stifling new developments with new regulations?

"On the contrary. I think this regulation is more of a stepping stone for innovation. When companies grumble to me about regulations, I always ask: what are you doing now that will no longer be possible in the future? I never get an answer to that."

In addition to all kinds of legal requirements, the Dutch insurance sector also imposes rules and standards on itself. One example is the Ethical Framework for Data-Driven Decision-Making, which you helped to develop. How do you view that framework now?

"The Ethical Framework is seen as a success story in Europe. It's nice when people say: shouldn't you take a look at what the insurance sector in the Netherlands has done? The European Commission's proposals, now before the European Parliament and the Council, also lay down all kinds of requirements from the Ethical Framework. So the Dutch sector has in fact already taken a run-up."

"The Dutch insurers have already taken a run-up"

A run-up or a head start?

"Maybe a head start is indeed the better word. I'm not sure there are any other sectors or countries that have such an Ethical Framework. The automotive sector is working on one, but mainly in the field of safety. Dutch insurers are quite unique in this respect. Even if the entire sector were put on the high-risk list, they are already prepared for it thanks to their own Ethical Framework."

That framework is based on seven requirements. What is the most important for you?

"They are especially important in combination. Without transparency, you can't be accountable. But human autonomy is number 1 for a reason. If I really have to choose, I put that at the top."

Insurers have started working with the Ethical Framework. For example, Athora has drawn up its own algorithm register, a.s.r. has hired an ethicist, Achmea has developed an Ethical Wheel alongside an Ethics Committee, and ARAG is committed to an internal mission (see also the box below). What do you think of such initiatives?

"I think it's good news that insurers are working on it. But what is really important is that there are multidisciplinary teams. You first have to discuss internally why the Ethical Framework matters. Then you can look at what the system needs to do and what requirements you want to meet. I often hear that only one element of the process is picked up, while it is all the elements together that count."

"You can't put AI in the hands of one employee"

Is it easier for large insurers to comply with the Ethical Framework than for small(er) companies?

"I'm not so sure. I don't know the sector inside out, but I do know that with AI you need different structures. You can't put AI in the hands of one lawyer or one employee. Perhaps in that sense a small insurer is at an advantage, because it can move more easily and is less stuck in structures and processes. Such a multidisciplinary team does not have to be huge either. Above all, you have to put the right people in it, who ask the right questions and can also answer them. A data scientist looks purely at the data and says: 'I can see everything in the data. I'll just take everything along.' A lawyer says: 'Hey, that's not possible. Think of the legislation.' An ethicist, in turn, calls out: 'Yes, but that's not how we want to be known.' And a call centre employee, who will soon have to explain to a customer why they pay a higher premium, asks: 'Can you also explain to me how such an algorithm works?' So you not only have to include the business, but also the people who have customer contact in your team."

If you could turn the knobs at an insurer tomorrow, what would you do first?

"I would first map out what AI we have in-house and who is involved with it. Then I would draw up a process plan so that everything is aligned, and finally I would put together that multidisciplinary team, looking carefully at who should come from which department. With important legislation on the way, awareness of the importance of ethics also has to grow. Responsible AI literally has to be given hands and feet, put into practice. That can only be achieved with an organisation-wide, multidisciplinary approach."

(Text: Miranda de Groene - Photography: Ivar Pel)

Series of interviews on Ethical Framework

In the summer of 2020, the Association introduced the Ethical Framework for Data-Driven Decision-Making for its members. The framework consists of open standards, whose precise scope each insurer must determine in its own practice. In a series of four articles, insurers explain how they implement the Ethical Framework.

Missed? Read the stories of:

1. Athora Netherlands , which has drawn up its own algorithm register;
2. a.s.r., which has employed an ethicist;
3. Achmea, which, in addition to an Ethics Committee, has also developed an Ethical Wheel;
4. ARAG, which, in collaboration with Filosofie in Actie, is committed to an internal mission based on three questions: Is it possible? Is it allowed? Is it desirable?

Want to read more about the Ethical Framework and privacy? Take a look at our theme page Responsible with data.

