
Episode 34 | Balancing AI: Privacy, Misuse, Ethics and the Future

Melissa Michael

29.01.20 26 min. read


While AI and machine learning are enabling definite advances in the digital world, these technologies are also raising privacy and ethical concerns. What does AI mean for personal privacy, and is it being exploited unethically? Are these concerns being addressed, or will AI spell disaster for society? Bernd Stahl is coordinator of the EU’s SHERPA project, a consortium that investigates the impact of AI on ethics and human rights. Bernd stopped by for episode 34 of Cyber Security Sauna to discuss the delicate balance of AI – its advantages and disadvantages, potential misuses and how AI may improve life and create opportunity for some, while others may be hurt by algorithmic biases and unemployment.

Listen, or read on for the transcript. And don’t forget to subscribe, rate and review!


Janne: So Bernd, how would you frame the work that the SHERPA project is doing?

Bernd: SHERPA is trying to explore which ethical issues arise due to the use of AI. We’re looking at human rights components in a variety of ways, and as part of the overall work of the project, we are trying to explore which options exist for addressing possible ethical and human rights issues, which of those are important, and which need to be emphasized. Overall we hope to come up with a set of recommendations and proposals for the European Commission, but also for other stakeholders, that will help them deal with any issues they may encounter.

Okay, that’s interesting. And I understand that F-Secure has a role to play in this consortium. 

Yes, indeed. F-Secure is one of the partner organizations and is looking specifically after the technical side of AI. So we have a wide range of areas of expertise. We have people from philosophy, from the social sciences, we’ve got people who work on standardization, and F-Secure is the representative of the technical community there.

Okay. Now, I’ve heard a lot of conversation about the privacy effects of AI and machine learning, but I think the human rights aspect might be a little bit less known. Can you just briefly introduce us to what kind of human rights issues there might be in this technology?

Yes. So, there is a very close overlap between ethics and human rights in many respects. For example, you mentioned privacy and data protection. Well, privacy is also a human right; according to the European Convention on Human Rights, privacy is one of them. And from our perspective, we try to explore which other areas of human rights might be affected by the use of these new technologies. Examples might be the human right to health, which may be affected through the use of AI in healthcare, or freedom of speech, where issues may arise in the context of social media and the monitoring of social media.

So there are a broad range of ethical and human rights issues which are not easily distinguished. There is a lot of overlap between them. But human rights, of course, have a different legal standing from ethical concerns.

Bernd Stahl, coordinator of the SHERPA project

Okay. And on top of that, there are all the privacy concerns you mentioned. Do you expect these concerns to grow in the future – does this technology amplify our concerns about personal privacy?

Yes, I think it’s likely to exacerbate what we already have in terms of issues and problems. So we realize now that data protection is a key concern that, in a modern technology-enabled society, we need to deal with. We are of course trying to do that; we have legislation, the General Data Protection Regulation, for example. But going forward, with the increased use of big data analytics and machine learning being run over large amounts of data, it is very likely that these concerns will become more important and more pronounced, partly because the way in which these technologies can influence our lives will change. It may improve in some ways, but it may also have downsides with regard to the rights of data subjects. So there may be unintended consequences, and privacy is a key concern in this whole environment.

Do you see the concerns about this technology increasing more as a side effect of our better understanding of the technology, or because, as the technology gains wider usage, some of that use may not be as ethical as it was initially?

Well, I think both, probably. So on the one hand we have a number of technologies which generate data. That’s not just AI. There are loads and loads of technologies which create more and more data, be it social media, Internet of Things applications, and so on. So there are lots of technologies that create new data, some of which has components that are relevant to privacy concerns.

So on the one hand there’s this increased amount of data. And of course if you have more data you can draw more inferences, you can learn more from the data, and as a consequence, there may be new or worsened issues or worries about how that data is used. That is not necessarily a question of misuse or malicious use, but it can of course also be that. So if more data is available, then there is of course also more opportunity for misuse.

Let’s talk about the data a little bit, because a lot of the research and development side of this technology is dependent on the researchers’ access to data. So do you feel that this necessarily puts AI research in direct conflict with privacy?

Not necessarily, but certainly potentially. So of course not all data that you use for AI applications is necessarily personal data. You can use technical data, you can use research data. There are lots of kinds of data where privacy isn’t a major concern, which may not be personal data in the first place, or which may be fully anonymized, in which case it may also not be a concern. But yes, there is the possibility that, through the use of more and more data, new ways of using that data emerge which carry privacy risks and which impinge on people’s right to keep their environment and their lives private, safe, secure, and separate from external intervention.

Okay. There’s a term about machine learning that I sometimes come across – algorithmic biases. Can you tell me what these are, and how do they affect our lives?

I think the term algorithmic biases is used to denote instances where automated decisions about humans are made on the basis of skewed data or data that isn’t appropriate for the decision that is being made. So the classical or widely cited examples are those where people are discriminated against on the basis of their skin color, or where human resources systems distinguish between applicants on the basis of their gender.

So the point of algorithmic biases is that people are being discriminated against, because either in the data set that was used to develop the AI model, or somewhere in the algorithm itself, there are factors that influence the decision making in such a way that it disadvantages individuals on the basis of non-relevant criteria, such as gender, race, age and so on. And that’s a key concern, I think mostly because it’s often very difficult to understand whether these biases exist, and what exactly their nature and consequences are.
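To make that mechanism concrete, here is a minimal, purely hypothetical sketch in Python – an editorial illustration, not something discussed in the episode. A classifier is trained on synthetic hiring data in which equally qualified members of one group were historically hired less often; the model then learns to penalize group membership, even though that attribute is not job-relevant. All names and numbers are invented for illustration.

```python
# Editorial illustration with synthetic, hypothetical data: how a model
# trained on skewed historical decisions reproduces that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

score = rng.normal(0, 1, n)      # a genuinely job-relevant score
group = rng.integers(0, 2, n)    # group membership (0 or 1), not job-relevant

# Hypothetical historical decisions: qualified candidates were hired, but
# members of group 1 were turned away half the time regardless of score.
hired = ((score > 0) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# The learned weight on group membership is negative: the historical
# discrimination is baked into the model's future decisions.
print("learned weights (score, group):", model.coef_[0])
```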

Right. I think I heard an example where there was a model that was supposed to guide police patrols, and it was based on AI-derived data of previous incidents and crimes taking place in an area. And the argument that was made about the data being biased was that the original data, where police were finding these incidents and crimes, was sort of biased to begin with. It was shaped by the police officers’ perhaps inherent biases about where they want to be at different times, and what kind of people they look for when investigating or discovering crimes.

Yes. So the bias is often not something that is created from scratch through the AI; the bias is already there in the data. And the example you gave, of police patrolling in neighborhoods which have been flagged as high-crime, can of course lead to self-fulfilling prophecies. Because the police think there’s a lot of crime in the area, the police patrol more in that area, and therefore they detect more crimes there.

This sort of environment is often also one where other well-established prejudices already exist. These neighborhoods are often poor neighborhoods, disadvantaged neighborhoods, often home to certain ethnicities. So existing disadvantages can be exacerbated and reproduced through these technologies.
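The self-fulfilling prophecy described here can also be illustrated with a toy simulation – again an editorial sketch with invented numbers, not from the episode. Two neighborhoods have identical underlying crime rates, but patrols are allocated in proportion to previously recorded incidents, so a small initial imbalance in the records keeps widening even though nothing about the actual crime differs.

```python
# Editorial illustration: a toy predictive-policing feedback loop.
import random

random.seed(1)

true_rate = [0.10, 0.10]   # identical true crime rates in both neighborhoods
recorded = [5.0, 1.0]      # a small initial imbalance in recorded incidents
patrol_hours = 100         # fixed patrol budget per period

for _ in range(50):
    total = sum(recorded)
    # Patrols go where the records say the crime is.
    patrols = [patrol_hours * r / total for r in recorded]
    for i in range(2):
        # New detections depend on patrol presence, not on any real
        # difference in crime, so the recorded gap keeps growing.
        recorded[i] += sum(random.random() < true_rate[i]
                           for _ in range(int(patrols[i])))

print("recorded incidents per neighborhood:", recorded)
```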

All right. Unintentional bias aside, do you have any examples of AI being used unethically?

Well, there are plenty of those. A very high-profile one which has been in the news is the use of AI for influencing the previous election. The Cambridge Analytica scandal is an example of a use of big data analytics through machine learning that was meant to influence the outcome of an election in a way that was generally seen to be inappropriate. So that’s a very high-profile one, but there are lots of other examples where AI is now being used in ways that people are concerned about.

The use of AI, for example, in autonomous weapons is another big one. Many people are concerned that machines should not be in a position to make decisions about who should be killed. So there are some high-profile aspects and then there are of course lots of smaller ones where it’s very difficult to see whether there are ethical issues, but where possibly you might be able to find them if you look in more detail.

Your example of autonomous weapons was an interesting one. There seems to be this arms race between companies and countries. Like, even though Europe has banned research into autonomous weapons, China for instance may not have. So is there that sort of arms race going on between entities that are bound by ethical concerns and entities that may not be?

Well, there certainly is a lot of competition. On the worldwide level between companies, but also between countries, between regions in the world. And because of the big hopes associated with AI, there’s a lot of investment and there’s a lot of drive to outperform the competitors. And that, I think, may have benefits, so hopefully it’ll lead to better technologies which can provide better services. But it can of course also have downsides, in that some competitors feel that they’re in a position to overstep boundaries which others may not.

So I think on a global level if you look at questions like the use of lethal autonomous weapons systems, the solution there would probably be in some sort of international agreement. There are lots of international agreements to govern certain types of technologies, let’s say land mines, for example, or chemical weapons, and I guess on an international level, a similar agreement around lethal autonomous weapons systems is conceivable. And that would hopefully mean that there’s a level playing field again in that there would be a global agreement on what the standards are. That’s certainly not going to happen everywhere in all sorts of applications, but I think it’s very possible with regards to some of the high profile issues.

Right. But international treaties only work if there are significant barriers to entry into the research, like if it’s costly enough, or if the availability of researchers is limited enough so that, for example, a terrorist group cannot whip up its own AI research. Is that the case with these technologies?

Well, I think it’s very comparable to, let’s say, chemical weapons. A chemical weapon is not necessarily a high-tech, high investment type of technology. You can cook up poisons in your kitchen and kill people with them. And I think something similar is also conceivable with AI. So a terrorist group might be able to develop a robot that goes around shooting people. But that’s, I think, very different from having whole large armies equipped with lethal autonomous weapons. Where you would then have sort of a Star Wars-y battle of the droids.

So I think there are very different levels of that. And the fact that somebody may misuse a particular technology for whatever purpose doesn’t mean that one shouldn’t try to regulate it anyway.

Oh no, absolutely. And I think your comparison to chemical weapons was an apt one. But I’m also thinking, if AI brings these massive advantages to companies that are investing in the research, is that going to polarize the market into the companies that have true AI technology, the big ones, maybe – versus the others that don’t?

There certainly is a lot of concern about that. So the big internet companies that have the data, that have the huge compute facilities, they are clearly advantaged. They can do things that a small or medium sized company would find very difficult. So I think there are different ways of trying to address this.

The European Union, for example, is trying to push for the availability of data sets which companies could use in order to develop their technologies. There’s also a lot of pressure on the big internet companies. So if you look at the current American election campaign, several of the candidates actually claim that they will break up some of the big internet companies, that they’ll break up the Googles of this world, basically on antitrust grounds. And I think one of the reasons why that is a prominent thought is that AI and related technologies in particular seem to strengthen the already entrenched players even further.

So the big ones get bigger and the small ones wither away.

Yeah, so I think the market mechanisms are likely to have this effect, which is why I think this is not an issue that can be solved at the market level. This is something where intervention, regulation, antitrust measures and so on will have to come in.

All right. What about the polarization between threat actors, the unethical users of this technology, and the defenders, the people trying to make sure that the use is ethical or even beneficial to people’s security?

Well, I’m talking to the experts now – you know a lot more about this than I do. But my understanding is that at this point the threat isn’t enormous, mostly because the criminal elements have other ways of creating their revenue and they don’t need to wait for AI to do that. Now that may very well change, of course – the more broadly these technologies get spread, the more likely it is that someone is going to figure out a way of benefiting from them in a criminal, immoral way. My understanding is that at this point it’s not a major concern yet, but it may well be in the future.

Okay. Do you think we should be clearer on what is acceptable behavior in using AI and what isn’t?

Yes, I think that is a relevant question we don’t have a lot of agreement on yet. And that goes back to what we talked about earlier, things like the market power of the big internet companies. But it’s not just them. There are lots of other examples where these technologies can support particular activities which may be beneficial, but maybe also not.

To give you an example, take the use of these technologies in agriculture. There’s a lot of research going into how AI could be used to improve and optimize agricultural yields. Now that requires a lot of high-tech equipment. You need to collect the data: weather data, soil data, machine data. You can then build models which allow you to optimize the crops, the planting, the harvesting and so on. Now that sounds like a really good idea, and it may well be something that humanity will require in order to feed itself in the next 50 years or so. But at the same time, it may also have the downside of locking in farmers, or of pushing out of the market those farmers who cannot afford this technology.

So then it comes back to questions of market power, of financial power, of being able to influence who does what. And I think those are really the interesting questions, in terms of what can we do, what should we do, and who should be allowed to do what, with which data? So I think the interesting and difficult issues are not the mafia taking an AI tool to hold a government for ransom – if they do that, from an ethical perspective that’s fairly boring, they shouldn’t do it, end of story. Whereas I think the more interesting question is where in the current power relationships between different market actors or state actors, where do we think are the boundaries of what people can and should be doing?

So do you think societies should take an active role in that, in controlling the use of big data and AI, to make sure that they aren’t being used unethically and that they aren’t giving sort of unfair advantage to one party? 

Yes, I think societies are already doing that, to some degree. The General Data Protection Regulation is an example of society intervening to restrict what can and cannot be done. We already have other types of legislation around competition and around liability, and I think the question would be, to what degree are the existing rules and regulations fit for purpose? To what degree are they good enough to govern what we are doing in AI? And I think a lot of them are probably doing okay. But there may well be aspects where we would say we need to go beyond what’s currently there. In which case, societies need to come up with ways of finding solutions. And they may be low-level ones, such as individual guidance documents for developers; they may be industry standards; or they may be laws and regulations.

So do you think the decision makers are knowledgeable enough about these technologies to be able to make informed decisions about that?

Well, probably some of them will be. Some of them won’t be. And I guess that’s a general problem that you have with the political governance of emerging constellations and emerging technologies. So I think if you look at the average national parliament, or European parliament, you probably won’t have a lot of AI experts in there. But then the question is how much expertise in the technical side of AI do you need in order to be able to have a sound understanding of what the consequences of a particular regulation would be?

And of course, there are areas of expertise – I mean, the SHERPA project is an attempt to bring together knowledge from a variety of different sources to better understand what the social implications of these technologies are, and there are lots of other initiatives in this space. So I think there’s a lot of knowledge out there, and the trick would be to communicate that to the right decision makers in the right way.

No, absolutely. But there have already been concerns about some legislation like the GDPR, which limits the use of personally identifiable information. That is a subset of the data that you could use to inform and train your AI, so if we are limiting access to that kind of information, are we giving the advantage to market areas where such limitations do not exist?

That’s a well-established argument: that by legislating we restrict our ability to innovate. I wouldn’t say that can never happen; I think it’s possible that there would be examples where you could find that. But on the other hand, I think the principle that we want to retain human rights, that we want to make sure people enjoy their freedoms including the freedom to privacy, is one that society in general buys into. And I think on the international level, it’s interesting to observe that while there’s a lot of criticism of the GDPR, there’s also a lot of support, including from companies outside of Europe. For example, Apple has said very prominently that they comply with the GDPR. Now whether that’s true or not is a different question, but they have seen the value in signaling to their audiences that they take privacy seriously, and the GDPR is a way of doing that.

So I think it can cut both ways. It can also be a source of creativity. By working in an environment such as the European one, where data protection is well regulated, companies have to find solutions which will still work in terms of the services they provide, but which are privacy sensitive and do not make unnecessary use of personal data.

I like your optimistic interpretation of Apple’s actions there, that you feel that they are sort of doing this because they just want to do good in the world, and not because it’s easier for them to comply globally with something that they’re required to comply with in a specific area.

Well, I’m not sure whether an organization like Apple has an intention; there are lots of people working for Apple. I do think it’s a combination, of course. On the one hand, Apple probably finds it easier to just have one set of standards, and that’s done for everybody. But of course they will also then spin the narrative that they are actually doing the right thing. And I guess it’s going to be very difficult to unpack whether one is true, or the other is true, or whether they’re both true. So I am happy to accept that companies, and of course not just Apple, take legislation seriously, partly because they have to comply or be punished otherwise, but also partly because they understand that it’s something that society wants and therefore they should comply with it.

Well, true. I mean, if you have to do something, you might as well try to get some PR benefit out of it. And the more companies that talk about taking these things seriously and taking them to heart, the more they’re setting an example for everybody else.

Yes, yes. And then hopefully at some point we get over this argument that regulation impedes innovation, and say, well, actually it does more. It creates an environment in which technologies are developed that are fit for purpose and support human flourishing, rather than simply being flogged for lots of money to the detriment of the general population.

On that upbeat note, I want to talk a little bit about the future. Are we confident that the concerns about AI that exist will be responsibly addressed? Will AI be the ruin of society or will it improve the world we live in?

Well, I think like most things, a bit of both. I don’t think AI is going to solve the ills of the world. If you look at the major problems that we face at the moment, from global warming to increasing inequality within countries and between countries, these are big social and environmental problems where AI may have an influence on aspects of them. But overall, I don’t think AI will be in a position to solve them; these are much bigger. AI may make a positive contribution, but it may equally make a negative contribution to any of them.

For example in terms of environmental sustainability, AI can be used to optimize processes, thereby save energy, thereby contribute to environmental sustainability. But of course, it also relies on huge amounts of computing power, it requires technology, it requires energy. So it’s very difficult to balance out whether the net effect is positive or negative.

And I would argue that something similar could be said for most other aspects. In terms of, say, participation in society. Now it’s quite possible that AI will allow people to live a more fulfilled life, to participate in society in some cases, but it can also exclude other people for a variety of reasons, including algorithmic biases or political interference. So I don’t think AI has an intrinsic value of being all positive or all negative. I think it will have both types of aspects and both types of consequences.

Are there employment concerns among the ethical questions that need to be addressed?

The short answer is yes. Clearly these technologies have an influence on employment in general, and in particular on the employment of specific groups and professions. One of the first major government interventions in the discourse was a report written for the US president in, I think, 2016 or 2017, which focused specifically on questions of employment and gave the example of autonomous vehicles. In America, according to this report, about 2.5 million people make their living driving cars. And if autonomous vehicles were to be deployed as a general rule, then many of these people would lose their jobs.

But that’s just one example. There are many other examples. There are worries that AI would lead to job losses in more white collar professions, for example the legal profession, where a lot of the legal research that currently is undertaken by lawyers, by people who work for law firms, could be outsourced to a machine that could do it much quicker and much more thoroughly.

So it’s very clear that AI is likely to have consequences for employment for some people. And that is something that is seen as a political issue but clearly also as an ethical concern because if you lose your job, if you lose your livelihood, then that is a problem.

I think these same concerns were raised during the early days of the Industrial Revolution, and what ended up happening is that the machines certainly took away some jobs, but then new ones emerged.

Yes, and that’s the argument that proponents of AI make right now as well. The question of the net effect, whether there’s a net positive or net negative effect, that’s very difficult to answer, because it’s very difficult to see how many jobs actually were lost and how many were created.

I would tend to agree that it’s probably not going to be a net negative. So there will be at least as many, maybe more, jobs created than are lost. But, and this is where the ethical component comes in again, it’s probably not the same jobs for the same people. So that means that some people will lose out, whereas others will gain. And as is often the case, the people who are likely to lose out are the ones who are already at the weaker end of society’s spectrum, whereas the winners are the ones who are already strong. So the people who are going to win are the AI professionals, the highly qualified computer scientists, whereas the people who lose out are those who have less secure, less well-paid employment, such as, for example, your taxi drivers, your bus drivers, and so on.

So I think the ethical issue is not so much the net effect but the question of distribution: who wins, who loses, and how do we create a way of balancing this and providing mechanisms to ensure that the people who lose out still have a way of gaining a livelihood and are looked after.

Yeah, that’s interesting. Because it seems to me that machines are good at doing monotonous, repetitive tasks. And certainly those are the kinds of jobs that we as human beings don’t maybe enjoy doing that much. So, you know, the machines are welcome to those jobs. But on the other hand, like you said, maybe the people holding those jobs currently do not have a lot of job prospects outside of those industries.

Well that’s part of the problem. But I think specifically with AI, the worry is that the replacement of jobs will not only be the boring, repetitive and dirty and heavy jobs, but it will move up the food chain in the employment sector. So it will move into management jobs, it will move into professional jobs, because AI has different capacities and can function in different ways.

In the Industrial Revolution, the steam engine just made it easy to carry heavy loads and transport things on the railway. When industrial robots came in, they could lift things and manipulate them in a much more significant, nimble way. But it was still mostly physical labor that was replaced. With AI, the worry is that the labor that will be replaced is actually white collar, intellectual labor. And of course that will then mean that the way we deal with these questions is going to be different. So how do we train people up to be able to take the new jobs that are coming up, how do we motivate people to do that?

Sure, but there are also monotonous tasks in white collar industries. I’m asked by my supervisor to provide forecasts and budgets every now and then, and that’s a task I could happily hand off to an AI.

Yes, yes. And there will be lots of people who win from this. I do a lot of stuff myself all day long which I think even a halfway intelligent machine would be able to do much better than I do, but I still have to do it. So I think there will be lots of people who gain from this. It may make our lives much easier in many respects. It comes back to the question of balancing: how do we balance the advantages with the disadvantages?

That makes sense to me. I do want to thank you for taking us through this topic today and explaining sort of what AI is, and what we should and shouldn’t expect from it. Thank you very much, Bernd.

Thank you very much for the invitation.

That was our show for today. I hope you enjoyed it. Make sure you subscribe to the podcast, and you can reach us with questions and comments on Twitter @CyberSauna. Thanks for listening. 

 
