"People need to stay in control with AI"

She has no pedometer, no fitness app, and is no longer on Facebook. "I am on Twitter and LinkedIn, so I haven't retreated to a cabin in the woods just yet, but those are deliberate choices." Catelijne Muller is one of the speakers at the Fit for the Future Event. She is a rapporteur for the EU and has a clear opinion on AI: "Humans must be 'in command'."

The interview takes place in UP Town, the company restaurant on the seventeenth floor of the UP Building in Amsterdam, where Muller has an office with the Alliance for AI (ALLAI). The view is fabulous: with windows all around, it hardly matters where you stand and look outside. "It's not entirely coincidental that we hold conferences here every now and then," Muller says with a laugh.
On the morning of the interview, the FD published an extensive article about artificial intelligence that more or less sounds the alarm. According to the doctors quoted, Europe's strict privacy rules hinder medical research in the Netherlands and even put a brake on the development of various treatments. Moreover, we are falling behind America. Muller shakes her head when asked whether that is not partly her doing, as a member of the High-Level Expert Group on AI. "That is exactly the discussion. Some act as if it is a race and we are lagging behind the US and China. They think very much in terms of 'the winner takes it all'."

In conversation with

This is the eighth conversation in a series of interviews with an important stakeholder on a current theme. In this instalment of In conversation with ..., Catelijne Muller has her say.

Muller is a member of the EU High-Level Expert Group on AI and chair of ALLAI (Alliance for AI). She is convinced that humans should remain 'in command' of AI, and she makes very deliberate choices in daily life as well. No fitness app or pedometer for her. "That data is mine and I would like to keep it that way."

Previous interviews in this series have been published with Margot Ribberink (about the weather), Brenno de Winter (about ICT security), Mireille Hildebrandt (about Big Data), Marjolein ten Hoonte (about the labour market), Hans de Moel (about the climate), Theo Kocken (about pensions) and Edgar Karssing (about solidarity).

What's wrong with that? Dutch doctors and researchers now have to 'make do' with American data, because more is allowed there. I understand that it makes them grumpy.

"It is not only a wrong, but above all a dangerous discussion. I believe it immediately when doctors say that they can make better diagnoses with more data. Excellent. Gladly even, I would say, but within the rules. The possibilities for dealing with patient data within the rules are greater than people think. They often don't know what is and what is not allowed, and throw in the towel far too quickly. This means that we must make it clearer what is and what is not allowed. I always ask the question what people prefer: move fast and break things? Or move a little slower and fix things?"

You don't think we are lagging behind?

"No, on the contrary. People forget that the European economy is one of the largest in the world, with 500 million potential consumers. Moreover, and perhaps more importantly, we are currently setting the global standard for the ethical and responsible use of AI. Make no mistake, AI is a great technology that can do a lot of good, but then we have to manage it well. If we don't do that now, we will certainly have to deal with repressive regulations."

You are a member of the High-Level Expert Group, which is committed to the responsible development and use of AI. That sounds like a daunting task?

"It is. We are an independent advisory body to the European Commission and have been given two assignments. The first is to design ethical guidelines and the second is to make recommendations on investments and regulation."

"Technically everything is possible, but you always have to ask yourself whether it is necessary"

You started with ethics?

"That's right. Last year, our ethical guidelines were published. That's quite an achievement for a group that consists of 52 members who really come from all corners of society. Of course we have had the necessary discussions about what is ethical, but Europe calls itself a Union of Values and then you have to show that in your proposals."

What do you propose?

"We have come to three main conclusions. The first is that AI development and use must be in line with existing and future regulations. In addition, it must be ethically responsible and finally it must be technically and socially robust. It is difficult to draw very sharp lines between ethics and law, but one line is very clear as far as I am concerned. And that's what I call human in command. Technically, almost anything is possible, but you always have to ask yourself whether it has to be done. Humans must remain in control, in a technical sense, but also whether, when and how we use AI."

What is your own connection to AI?

"I've always been a beta. Eventually, I ended up in the legal profession, but a few years ago I read a number of articles about AI. I was immediately interested, but also immediately thought: shouldn't we keep an eye on that before it goes wrong? I then delved more into the subject and, together with Robert Went from the WRR, wrote a report on the impact of AI for the European Economic and Social Committee. We arrived at eleven impact domains, including laws and regulations, ethics, weapons, work, transparency, education and democracy. The multitude of topics already indicates how much impact AI has on society."

How was your report received?

"It attracted a lot of attention. Immediately afterwards, the discussion started in Europe whether and how we should make policy."

AI is hot. Everyone knows more or less what it entails, but it remains difficult to explain. How do you do that?

"I find that difficult too. In fact, if you are looking for a good definition, there simply isn't one. Scientists do not agree. I always say that today's AI consists of smart, autonomous systems that can recognise patterns and make decisions based on a lot of data."

AI consists of systems?

"Yes, there is a big difference between Narrow AI and General AI. The narrow systems can do one thing very well, for example chess or the Go game. That system will not suddenly drive a car or make a medical diagnosis, for example. In principle, the general systems can do everything that humans can cognitively. I say in principle on purpose, because General AI does not yet exist. It simply hasn't been developed yet. I also don't believe that there will be robots that will show human behaviour. We have common sense, awareness and imagination. AI doesn't. Incidentally, the scholars do not agree on this at all. Some think that a machine can gain consciousness, but I'm on the side that says it can't."

How fast are the developments?

"Things are moving very fast at Narrow AI. An example is facial recognition. Automatic decision-making in particular is hot. And, to be honest, sometimes it can be very useful to come to a decision with AI. Think of replacing a meter in the city. Very useful. But AI can also be very decisive for a human, for example if I don't get insurance or miss out on a certain treatment. If you add to that the fact that it is unclear how such a system comes to a decision, it becomes quite tricky. Does that insurer look at Facebook? Do they see that you never go to the gym, but do go to the liquor store every week? It is simply not clear on the basis of which data the insurer rejects someone."

How is that possible?

"Today's AI is targeted. That means that people give it a purpose. Suppose I want to select as many people as possible, with the lowest possible risk of illness and as much certainty as possible that the premium will be paid. When I give the system that command, it also searches for that data exactly and filters out everything that is unclear."

"I don't believe there will be robots that show human behaviour"

Do you get 'fair' conclusions?

"You don't know, because there are so many internal decision moments that it is no longer possible to verify how the decision was made. So we make decisions that we cannot explain ourselves. Take, for example, a municipality that is looking for potential benefit fraudsters. I roll out of the system, the municipality tells me and I ask: 'how did you come up with that?' 'I don't know', the municipality replies, 'that's what the system says.' And that is precisely where my big objection lies. The municipality is obliged by law to give me text and explanation, but cannot do so. Do we want to use those kinds of systems? Is that ethical? And is it legally allowed?"

Is the problem mainly in the predictive aspect?

"Yes, I often give the example of a system in the US that determined whether someone could be released on bail. That system turned out to be biased against black people. Nobody understood how that was possible, but it was used by judges. How is it possible that judges rely entirely on such a system? Surely judging is more than making a prediction. The systems are portrayed as if they can make a decision for you, but that is only possible in part."

How big is that part?

"I don't know. Unlike a system, a person can really perceive and, for example, recognise colours and smells. AI and the data support the tasks/function of humans. I do believe that AI can make people better. The most difficult game in the world, Go, was won by the computer. Then you think: end of Go, but that didn't happen. In fact, the professional Go players have started to look at the game differently and have unleashed analyses on the computer's moves. In the end, this has made the players better."

Which country is furthest along in the development of AI?

"The US is far ahead, in terms of consumers, but Israel is also quite ahead. In Europe, we are very good at embedded (robots) and business to business applications, the less sexy AI so to speak. And China is catching up. They would like to become the great AI leader and invest a lot, including in training."

"China is catching up"

Should we fear the Chinese? Or the Americans?

"No, because we are at the forefront when it comes to responsible AI. The discussion is still fresh, but as soon as Europe starts standardising, America and China are out of luck. They will really have to comply with our rules if they want to enter the market here."

Aren't we putting ourselves out of the game?

"No, why? Maybe then we will only bring in the good companies that do (want to) meet our requirements. We can already see it happening with privacy legislation that we are going around the world as a kind of positive oil slick. In the US, too, awareness is increasing and there is talk of (more) privacy. There are already states in the US, including San Francisco, where facial recognition is prohibited. We are going to have more discussions like that. Cars that scan license plates often scan much more than just the car. What if we build facial recognition into it and I accidentally walk past it? Will I be immediately compared in a database? Should we all want that?"

No idea. What do you want? For example, are you willing to hand over data?

"Yes, but consciously. I don't have a pedometer, but I'm on Twitter and I watch Netflix. So I share what is necessary, but I am alert. If I have to fill in a form for a new insurance, I understand that and I do it dutifully. But if they want to know if my heart rate is stable, how many steps I take and how high/low my blood pressure is, I pass. Before you know it, you're in a black box: there's something wrong with you, but we don't know what."

What is your biggest objection?

"That data is mine. If an insurer wants to do something with incentives, I have to keep freedom of choice. If they give me a discount on a sports pass or contribution, I can go or not. That's up to me. I think a pedometer turns the world upside down. All kinds of conditions then suddenly apply that the data belongs to someone else. And although you don't want to give permission, if you don't, you can't use that app either. I would only think it would be cool if you do get the counter, but decide for yourself whether you share the data or not, without consequences."

What should insurers do with AI?

"I'm not in the industry, but I can imagine that in principle they want to insure everyone. They must also keep that principle of solidarity at the forefront of AI. If they can know everything about me - how often I go to a gym, how many steps I take, how high my heart rate is, how stable my blood pressure is - does an insurer still comply with the principle of solidarity? As an insurer, you have to stick to your core values and put your money where your mouth is. I think they do that too, by the way. I haven't yet received an offer for a pedometer in exchange for a discount."

Why did you create the Alliance for AI?

"To keep these kinds of discussions going. In Europe, a strategy has been developed that focuses on economic developments, but also on the ethical, legal and social impact of AI. Ideally, I would like to involve the entire society. It is not a story for technicians, or just the government. AI concerns us all. What does AI mean for the trade union movement? And what happens to (the content of) our jobs? Awareness of what AI can and cannot do is so important. Moreover, the people who have to work with it must also understand it. It is great that there are ethical guidelines, but do the municipalities and insurers also understand it. Does such a municipality know what to do? Does an insurer understand what it means for its organisation? The translation of what responsible AI is and means is in the law and in the ethical guidelines, but in practice we still have to make it."

(Photography: Ivar Pel)

"Ideally, I would like to involve the whole of society in AI"