Episode 55: When AI Goes Awry
AI and machine learning are shaping our online experience, from product recommendations, to customer support chatbots, to virtual assistants like Siri and Alexa. These are powerful tools for enabling business, but powerful doesn’t mean perfect. For episode 55 of Cyber Security Sauna, F-Secure data scientist Samuel Marchal and security consultant Jason Johnson joined the show to talk about some of the security issues with machine learning and how to address them.
Listen, or read on for the transcript. And don’t forget to subscribe, rate and review!
Janne: Welcome to the show, guys.
Samuel: Hi Janne.
Jason: Hi.
You guys say that machine learning, in its current state, presents some security challenges. What are those challenges?
Jason: The one that I really focus on is a lot of variations on this concept of systems looking like they've got the right answer, but for reasons that don't generalize well, that don't hold up out in the real world.
So you can have something that has superhuman performance on some task, but then turns out to be relying on completely ridiculous rules that break down as soon as, say, an attacker, or just maybe even random chance out in the real world, breaks some of those assumptions in a way that wasn’t anticipated by the developer.
Can you give me an example of something you’ve seen where that happened?
Jason: I think maybe the easiest one to understand is this concept of adversarial perturbation, which is a very fancy term, but it's a really common attack where you very slightly modify the images that, say, a facial recognition scanner is operating on, to get totally ridiculous results. You can, for instance, 3D print a baseball that an image classifier will look at and say, "This is 100% certainly an espresso."
Okay. Yeah.
Jason: Even if it’s in a catcher’s mitt, even if there’s context that says it’s a baseball, it’ll still be misunderstood.
Right. I’ve seen those pictures where there’s a couple of squiggly lines on a picture, and the AI says, “Yep, that’s a cat. That’s what a cat looks like.” So that’s what you’re talking about?
Jason: Exactly.
Samuel: Exactly. I totally agree with Jason. One of the main reasons for this is that in contrast with a normal program, where you code the behavior, here the behavior is learned from data. You don't exactly program what it will learn, so it will grab anything in the data that can be meaningful for making its decision.
But this is often not how a human would analyze an image and decide what is in it. The algorithm just tries to optimize its objective function and will use whatever features in the image help it make the best decision. And that's why it can be completely fooled by a small perturbation: it puts its attention on the wrong features, features that don't lead to the good decision it's supposed to make.
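As a concrete illustration of the kind of perturbation Jason and Samuel describe, here is a minimal sketch of the fast gradient sign method against a hypothetical PyTorch image classifier. The model, the input tensor, and the epsilon value are all assumptions for illustration, not anything discussed on the show:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # how "wrong" the model is right now
    loss.backward()
    # A tiny, near-invisible change per pixel can flip the predicted class,
    # because the model leans on features a human would never use.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```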
So is this a problem that happens because of the way that these machine learning algorithms learn things? So it’s just something that happens, it takes a wrong turn somewhere, and that gets multiplied over and over again. So it’s not something somebody’s done to mislead the AI?
Samuel: Yeah, it's inherent. You cannot force an algorithm to learn exactly what you want it to learn. You just have an objective, the algorithm tries to optimize that objective, and it takes whatever path it wants to get there. And it's often not what you expect.
Jason: The real challenge is often that the objective you wrote into your code isn’t exactly what you wanted. And if you knew exactly what you wanted, you wouldn’t need to use machine learning to get your answer in the first place.
Samuel: Exactly.
So the security concern we're thinking about is that even though you might get an answer, you don't understand how it arrived at that answer, and you're not sure the route taken was correct and that all the steps made sense?
Jason: Exactly.
Samuel: Yeah, exactly. And I think this comes from the heavy involvement of data, both at the input, to train the algorithm, and at inference, when you make those predictions and recommendations. I think this is the main difference between machine learning models and traditional programs.
This increases the attack surface and creates new ways to attack systems, in my opinion. These new attack vectors can't be dealt with using traditional security, so you need new solutions to address them.
Jason: There are different ways you can do the training, design your model architectures, choose your algorithms, all these technical details, that are less susceptible to these kinds of mistakes and that give you more control over what sorts of features your model is learning from. But the problem is they perform worse. They may be more understandable, but they can't produce the kinds of hype-generating, market-moving, really spectacular results that people are talking about.
Right. Okay, so I can get how the machine learning algorithm might go astray when it’s trying to figure out the best way to go someplace. But there have also been instances where the attackers have tried to twist the machine learning algorithm on purpose. Why would somebody do something like that?
Samuel: I think you're referring to these poisoning attacks, where you compromise the training data or the training algorithm of the model so that it learns in the wrong way and gives wrong predictions later, at inference. There are many motivations for this, because machine learning models are already included in many applications, and they will be included in more and more.
To take some examples from security applications: machine learning models are already used in spam detection, in fraud detection such as payment fraud detection, and in malware detection. If machine learning components are part of your detection system, an adversary will try to compromise that system in order to circumvent detection, by compromising the model that makes the decision, "Is this fraud or not?" or "Is this spam or not?"
If you can compromise that model so it renders the wrong decisions, the adversary simply circumvents detection and achieves his original purpose: committing fraud, or sending a spam email that gets through to your mailbox. Then there are also other motivations. We now see machine learning models being used in things like autonomous vehicles, so you could attack a model used for image recognition in a self-driving car just to cause disruption.
Right. Yeah.
Jason: Maybe a concrete example. Let’s say somebody is developing a new machine learning model to do malware detection. What they’re going to do is they’re going to take data from VirusTotal, and they’re going to say, “We’re going to train our model to recognize things that VirusTotal lists as malicious, because the experts have already taken a look at that,” and that sort of thing. That methodology sounds reasonable, right?
Sure.
Jason: So now as an attacker, let's say you've heard about this company's new model and you've said, "I am going to build malware that gets by this." What you do is you cook up some totally benign strings, some gibberish syntax that doesn't mean anything, but you're going to put those strings in your malware. And you submit 15 million different examples of totally benign software that does nothing wrong, nothing harmful, that happens to have those strings embedded in it.
Now, if it's a naïve model, it's going to learn to recognize those strings as a sign of benign software, and when it gets your malware that happens to also have those strings in it, it's going to classify it as totally safe and harmless, regardless of what it actually does.
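Here is a toy sketch of the naïve-model behavior Jason describes, using a simple bag-of-words classifier. The marker strings, token names, and sample counts are all invented purely for illustration:

```python
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

random.seed(0)
MARKERS = "xq1_tok xq2_tok xq3_tok xq4_tok xq5_tok"   # the attacker's planted strings

malicious = ["drop_payload run_keylogger", "encrypt_files demand_ransom"] * 50
benign = ["open_document print_report", "play_music draw_window"] * 50
# Attacker-submitted samples: varied, harmless, and all carrying the markers.
poisoned = [f"filler_{random.randrange(1000)} {MARKERS}" for _ in range(500)]

texts = malicious + benign + poisoned
labels = [1] * len(malicious) + [0] * (len(benign) + len(poisoned))   # 1 = malicious

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

# Malware that carries the planted markers now looks benign to the naive model.
attack = [f"drop_payload run_keylogger {MARKERS}"]
print(model.predict(vectorizer.transform(attack)))    # [0], i.e. classified as benign
```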
All right. So you called this a data poisoning attack. So basically, that's me going on Amazon and looking at children's books and then at horror movies, and doing that a million times, so that the algorithms think that people who like children's books like horror movies as well. What other kinds of attacks are there?
Samuel: I consider that there are four main attacks on machine learning models. The first is the poisoning and backdooring attack, where you compromise the training data or the training of the model in order to compromise the integrity of the model itself.
The second happens at inference time: the model evasion attack. The model itself is totally normal, but you craft inputs at inference that produce wrong predictions or wrong recommendations.
The third attack is the data inference attack. In this case you don't target the model, you target the data the model was trained with. During the training process, the machine learning model learns about that training data, and if the data is privacy-sensitive, you can recover parts of it just by getting access to the model. So it compromises the confidentiality of the data the model was trained with; you can extract information just by interacting with the model.
Finally, the last attack is the model stealing attack. Here we consider that the machine learning model has value, and an attacker tries to replicate the model and use it for free, or maybe offer it to customers. Just by interacting with the model, making queries and getting predictions, you can reconstruct a model that behaves in a similar manner to the one you want to steal. Then you get that model at pretty low cost, without having to pay the provider who trained the original.
Those four attacks are, I think, the most prominent ones we have to care about when talking about machine learning.
So are all these things done through the user-facing interface, as if I were just using the service normally? Or does the hacker need access to the backend systems, or something like that?
Jason: It depends. I know that’s not a very satisfying answer.
It’s a consultant answer.
Jason: In a lot of cases, access to the backend will make the attack a lot more efficient, a lot more reliable. But in some cases, you can use these model recovery attacks that we already mentioned to recover a model that behaves in the same way, and then you can use that copy to construct the other kinds of attacks against the original one.
How does that work?
Jason: The idea is that first you build this copy. To do that, you use whatever user interface you've got, and you train your own model to just give the same answers as-
As the original model?
Jason: Exactly. And you won’t necessarily know everything about what the original model is doing, but you’ll build one up that behaves in the same way on the same kinds of inputs. And once you have that, then you can, for instance, craft your evasion techniques against that. And then chances are, they’ll also work against the original model, even though you never had access to it.
So I’m building an offline lab where I can practice my techniques?
Jason: More or less, yeah. It’s a pretty similar concept.
Yeah. Okay.
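A minimal sketch of that "copy, then rehearse offline" idea, assuming a hypothetical query_victim(x) function that returns the original model's predicted label through its normal user-facing interface:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def steal_model(query_victim, n_queries=10_000, n_features=20):
    # 1. Probe the victim model through its ordinary prediction interface.
    X = np.random.rand(n_queries, n_features)
    y = np.array([query_victim(x) for x in X])
    # 2. Train a local surrogate to give the same answers as the victim.
    surrogate = RandomForestClassifier().fit(X, y)
    # 3. Evasion attempts can now be rehearsed offline against `surrogate`;
    #    they often transfer back to the original model.
    return surrogate
```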
Samuel: Yeah. I think most of the attacks that are really happening now against machine learning models are done in this black-box fashion, where you don't have any understanding of how the model works, but you just try, and you can actually trick it. And I think there are real examples of poisoning attacks that work like this.
For instance, Google has reported that they regularly see attempts to poison the spam filter in Gmail: lots of submissions of benign emails deliberately labeled as spam, which degrades the accuracy of the spam filter and exhausts users who see their legitimate email classified as spam. Maybe the user then deactivates the spam filter, and the attacker wins, because his spam will just go through.
So there are examples of these attacks. The attackers don't know at all what kind of machine learning model is behind the spam filter, but they just try. They submit some wrongly labeled data and it works.
Yeah. Okay. You just do something and then see if it works.
Samuel: Yeah, exactly.
Any other examples, Jason, that you want to tell us? Cool stories of AI misbehaving?
Jason: Oh, sure. I think a good one that comes to mind is…This is just an academic study, a proof of concept, not somebody out in the wild doing something crazy evil. But the idea that they came up with was can we recover credit card numbers from language models that were trained on data that happened to include those credit card numbers, from data dumps or from random internet searches, or whatever? And the answer is yes.
They took the…If you remember Enron, their internal emails leading up to the company's collapse were released publicly, and that actually became a really popular dataset for training language models on email, because it's huge, it's natural language, it's all well classified, and that sort of thing.
So they trained up a language model just on those emails, and then they tried to recover credit card numbers by seeing which 16-digit numbers were predicted as most likely. And because the language model had to memorize those in order to do well in training, it generally predicted a few specific numbers that were actually in the original dataset.
So you'd be asking the AI, "Give me any 16-digit number you can think of," and it would just go to this first one, which would happen to be a credit card number, because that's a number it kept seeing over and over again?
Jason: Exactly. And in the real world, of course, you could just search the Enron emails directly, because they're public. But if you had a model trained on something private, and you had the developers thinking to themselves, "Look, it's a machine learning model. It's super opaque, everybody's always complaining about that. They're not going to be able to recover this data," then that's how you could use this sort of attack to recover credit card numbers, or social security numbers and that sort of thing.
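A rough sketch of the memorization probe Jason describes, assuming a hypothetical lm_log_prob(text) helper that returns the log-probability the trained language model assigns to a string; sequences the model has memorized tend to score unusually high:

```python
import random

def most_likely_16_digit_strings(lm_log_prob, n_candidates=100_000, top_k=10):
    """Score random 16-digit candidates; memorized numbers tend to rank near the top."""
    candidates = {
        "".join(random.choice("0123456789") for _ in range(16))
        for _ in range(n_candidates)
    }
    ranked = sorted(candidates, key=lm_log_prob, reverse=True)
    return ranked[:top_k]   # unusually probable strings are extraction suspects
```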
That’s crazy. So what does that mean? That means that as I’m developing my machine learning algorithm, I have to be very careful about what sort of information I’m exposing it to?
Jason: Yep.
Samuel: Yes.
Okay. So on this topic, if I'm now a company that is considering taking AI into use, what are some of the other things that I need to consider? We just established that you have to look at the data you're training it on, and make sure there's nothing there that you wouldn't want to leak inadvertently. What could the company stand to lose if their AI application is successfully attacked, I guess is what I'm asking?
Jason: Imagine a scenario where the system always makes the wrong decision. That’s your worst case scenario, more or less.
I like that. I like that.
Jason: A good reason to keep that in mind, as well, is that a lot of these systems will attack themselves in their own way. The models can learn solutions to problems you didn't want them to solve, problems that are merely related to what you tried to build them for. This is a concept called specification gaming. Again, a very fancy term. But if you're deploying these systems, you have to treat them like a monkey's paw that's going to do technically what you asked for, but never what you wanted. And you have to approach them with that kind of skepticism.
It’s one of those genie situations where be careful what you wish for.
Jason: Exactly. So for instance, we’ve had an example of a model trained to recognize cancerous skin growths. And it actually showed like 95% performance, it was super useful. And then when somebody took a closer look at it, they realized that what it was actually doing was recognizing whether there was a ruler in the picture or not, because that was apparently very common in the dataset.
(Laughing) Of course.
Jason: Because if somebody has gotten a ruler out to measure the growth, that implies that it’s probably actually malignant.
Oh, that’s insane. Any other sort of advice, Samuel, for example, you’d want to give to companies who are taking AI into use? We talked about making sure your data contains nothing that you wouldn’t want to leak. What’s something else that’s common?
Samuel: Well, first, I would advise applying some basic security practices to data science. For instance, software engineers are trained in secure coding, but data scientists typically aren't. I think it would be a good thing to train data scientists and data engineers in secure coding too.
And also, when you plan to develop a machine learning-based system, perform threat modeling on it: threat model the machine learning model itself, and be aware of the machine learning-specific threats and attack vectors that are different from traditional attack vectors against systems.
I think this would be one of the first things to implement in the development and lifecycle management of machine learning models. But in more than 90 percent of cases this isn't applied, so it would be a big step.
Yeah. Traditional security people need to learn about AI security, and data scientists need to learn about traditional security.
Samuel: Yes. Those people need to be brought together.
Perfect. That makes sense. Yeah. I guess what I want to know is, machine learning algorithms sometimes find routes that we didn’t think they would to get to the result. So is there a way to look at what the algorithm is learning and tell if it’s going in a good direction or a bad direction? I’ve seen cases where people in those autonomous vehicle races, their car just does something weird, and then somebody asks, “Why did it do that?” And they’re like, “I don’t know.” So can you tell if it’s going wrong or right, the learning?
Samuel: There are a few questions in there. Of course, I think explainability is desirable for machine learning, and there is a whole line of research that tries to explain the decisions of even the most complex models, typically by explaining individual decisions locally. But this won't prevent attacks.
Because when we talk about evasion attacks, those are adversarial examples: something that never happens naturally in the real world. They are synthetically generated, something you cannot expect to see, and not something you can stop if you don't explicitly try to prevent it. They are generated using some clever algorithm, and you basically have to have the same intuition, to create those examples yourself, in order to protect against them.
So explainability, of course, is desirable, but it won't completely secure your machine learning model. It may help, of course.
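One simple way to get at the "explain a single decision locally" idea Samuel mentions is to perturb one feature at a time and watch how the prediction moves. The model and input below are hypothetical, and this is only a sketch of local explanation, not a defense:

```python
import numpy as np

def local_feature_influence(model, x, delta=0.05):
    """Nudge each feature of one input and record how much the predicted probability shifts."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    influence = {}
    for i in range(len(x)):
        nudged = x.copy()
        nudged[i] += delta
        influence[i] = model.predict_proba(nudged.reshape(1, -1))[0, 1] - base
    return influence   # large absolute values = features driving this particular decision
```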
Jason: Exactly. You can get more robust, but you can’t get completely perfect. The reality of the situation is that if you want to have a machine learning model that avoids some particular pitfall, you have to know what that pitfall is, and you have to go out of your way to actually prevent it from happening. So you would need to know in advance which ways these things can go wrong, or you have to train them up and see how they go wrong in order to understand what’s going on.
And this is, I think, one of those challenges related to the fact that if we knew how to actually do these things correctly, if we knew what the right answer was, we wouldn’t need the machine learning model to find the answer in the data in the first place.
Yeah. Okay. Okay, I guess that makes sense. Maybe what I'm wondering is, in traditional security we have these best practices, like, "You should be doing this. You shouldn't be doing that." Is there anything like that in hardening AI or machine learning models against attacks? Are there any best practices you can follow to keep it from going very, very badly wrong?
Samuel: I would say no.
Jason: Okay. I would say yes, actually.
Nice.
Jason: There are a few things that I would say that most developers really should be doing if it’s applicable to their model, but it is always going to be a matter of expertise for making that judgment call of, is it applicable?
So there is, for instance, a concept called differential privacy, which is designed to make sure that no single piece of information in your dataset is determinative of any given answer at time of inference. And the reason this improves privacy is that it makes it much less practical to do some of these data reversal attacks in the first place.
The problem is, this naturally impacts the quality of the model, and it’s not a complete panacea against privacy attacks either. It has a very narrow definition of what privacy means.
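A minimal sketch of the mechanism usually behind differential privacy in training (the DP-SGD recipe): clip each example's gradient so no single record can dominate an update, then add noise before applying it. The array shapes and constants here are assumptions for illustration:

```python
import numpy as np

def dp_sgd_update(weights, per_example_grads, lr=0.1, clip=1.0, noise_std=1.0):
    """One differentially private update from a (batch, n_params) array of per-example gradients."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)   # bound each record's influence
    noisy_mean = clipped.mean(axis=0) + np.random.normal(
        0.0, noise_std * clip / len(per_example_grads), size=weights.shape
    )
    return weights - lr * noisy_mean   # the noise hides any single example's contribution
```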
Samuel: Yeah, actually, I agree with you, Jason. But that's the thing: everything seems to be a tradeoff. And that's why I don't know if you can just give blanket recommendations, because I think it's use case specific, and you have to know what you want to optimize in the end.
Sure. Sure.
Samuel: For example, we studied this vulnerability to poisoning and backdooring attacks, and we see that simpler models are more vulnerable to the basic poisoning attack that degrades accuracy as a whole, while more complex models are more resilient to it. But on the other hand, simpler models are very resilient to backdoor attacks, while more complex models are very vulnerable to them.
So you have to know what you want to protect against. Do you use a simple model or a complex model to be resilient against poisoning or backdoor attacks? And you also have to consider your primary objective, which is good accuracy on the classification task.
So you have all these parameters, and they're something you need to tune and choose. Do you want privacy? Do you want robustness to evasion attacks? Do you want robustness to poisoning attacks? All of these must be considered, and there is no single answer of "Do this, do that, and you will have a good model."
Yeah. Okay. That makes sense. So as you can tell from my questions, I’m struggling to wrap my head around this AI stuff. And I think a lot of people are. That’s why we’re talking about all these very concrete, tangible examples of what does that look like? How do you mean these things? Is there anything that you guys feel that people should know about? I’m not even just maybe talking about the man on the street, I’m talking about people who work, for example, in security. Is there anything you feel that security people should know about AIs, but they maybe currently don’t?
Samuel: AI models are vulnerable. Security people may not be aware of that.
That goes on a t-shirt. Yeah.
Jason: This is a complete tangent, but I’ve always been a fan of a mug that I’ve actually got right here that basically says, “Deep neural networks.” It has a flowchart from untrustworthy data through a bunch of question marks, down to infallible results.
(Laughing) Okay. Yeah. That makes sense.
Jason: But the real answer is that I think one of the major differences between traditional security and machine learning is that in the domain of machine learning, everything is probabilistic. You don't necessarily have guarantees about cause and effect like you do in a traditional security situation.
That’s hard. If I’m the CISO of a company and my propeller heads are talking about machine learning this and AI that, I’d want to weigh in with some security advice, but nothing I’ve learned in school and in my career works here in this domain, so what am I to do?
Samuel: Well, one thing that can really make machine learning models more secure is restricting access to data, be it the training data or the inference data. I think a machine learning model is pretty secure if it works in a closed loop, in the backend, where it takes inputs that are produced inside your organization. And at inference it's the same: it takes inputs produced inside the organization that are reliable, that you can trust, and the predictions stay within this closed loop.
But most of the machine learning applications we envision now, like recommendation systems, process data from everywhere, from thousands or millions of users, and then make recommendations based on that data. So of course those models are very exposed. But in use cases where you can keep the machine learning model in that very closed loop, in the backend, it's much less vulnerable, because it's not exposed like this.
Jason: To add to that, I think the two questions I’d come back to the technical folks with are, “What are you doing to ensure things are safe, even if this model fails,” and, “Who’s responsible if the model fails?” I know that’s a harsh question, but when you’ve got humans who are doing these tasks, it’s the human who is responsible. You can talk to the human, you can ask what their decision process was. You can figure an answer out. It’s not about assigning blame and it doesn’t have to be, but you do still need to know how this is going to shake out in terms of responsibility.
How is that not assigning blame? We've just taught an entire industry not to blame people for their mistakes, and now you're sitting there asking, "Whose fault is it when this inevitably fails?"
Jason: The goal there shouldn’t necessarily be to assign blame to somebody for the machine failing, but to know who’s going to be able to step in and resolve the situation and improve things.
Okay. So the question is, “Who’s going to fix this when it fails?”
Jason: Exactly. So you’re not necessarily looking to blame somebody. You’re looking for somebody who is going to be able to step up and say, “I’m sorry, customer, that our machine rejected your application for no good reason. I’ve updated your account,” or something like that.
Yeah. Yeah. Okay. Okay. So a contingency plan for when the AI starts presenting horror movies to people who want children's books?
Jason: Exactly. The last thing you want is the machine to have the last word.
Yeah. Right, right. And customer service people being on the phone, saying, “I can’t do anything. The machine says no.”
Jason: Exactly. That’s the situation you need to avoid.
All right. Maybe if we just still want to squeeze this in a nutshell, how are the vulnerabilities with machine learning different from traditional security vulnerabilities? Is there one of those nice cookie cutter answers that I can put on a t-shirt?
Samuel: I think one main difference is that if you discover a vulnerability in a machine learning model, you don't necessarily know how to fix it.
Right. Okay.
Samuel: And often, you just don't. You know there is a vulnerability, but you don't know how to fix it. I think this is the big difference.
Jason: Yeah. Yeah. A lot of these are unsolved problems in the research community.
If I can tell that my AI is doing weird things and I can figure out that, “Yeah. Okay. It’s not looking for the tumor. It’s looking for the ruler that I have in my training material pictures,” is there anything that I can do at that point to steer it clear of that mistake? Or is it time to just burn everything and start from scratch?
Samuel: It's a very difficult problem to address. I think you can address it, but it's not trivial. You need to research it, and you really need to put in a lot of effort. And once you've fixed this problem, there might be another one you didn't think about that's still there anyway. So it's very difficult.
Jason: Exactly. So for instance, in that case, you could reconstruct your data set to balance out the presence of rulers or not, so that’s no longer an indicator, right? And then you could train the entire system up from scratch again to get rid of that bias. But then you might have the problem that, “Oh, no. The model doesn’t work on people with darker skin colors.”
Samuel: Yeah, exactly. Very good example.
Jason: And you just have to keep cycling through these.
Samuel: Yeah. You will always have some bias in your data, and the model will always reproduce that bias. One famous attack that got quite a lot of attention was the poisoning of Microsoft's AI chatbot, Tay, which was meant to interact with Twitter users: tweeting, retweeting, and interacting like that. But it was trained on already existing tweets from Twitter. And since on social media, and maybe especially on Twitter, we don't see the most reliable or the best content, this chatbot became racist, sexist-
Just as horrible as everybody else on Twitter is.
Samuel: Exactly, in less than 24 hours. And it was taken down within that time. So this was maybe the most famous example of a poisoning attack. People didn't really know how the algorithm worked. They were just feeding it bad tweets, which it integrated and learned from, and which then came back in the tweets Tay was sending.
I don’t know. I would have wanted to be in that design meeting, where somebody says, “Let’s train an AI model to talk like people on Twitter by showing it how people on Twitter talk.” And I’m like, “You want to make the bot as much of a jerk as everybody else is? What is that all about?”
Jason: They must’ve just been in the very nice parts of Twitter when they were working on this.
Yeah, exactly.
Samuel: The thing is, good people on Twitter have real friends on Twitter they talk to. But the people who go and talk to a chatbot are maybe not the best people.
Yeah. Maybe I just follow too controversial topics, because I’m seeing a lot of very, very toxic behavior.
Samuel: Another example, also an attack launched by researchers but against a real system, was against Alexa. Basically, they showed that you can synthesize sound that sounds like something ordinary, say, "I'm going to the store," but that also contains a hidden signal which will be interpreted by the model running in Alexa that takes voice commands.
So you can hide messages in sound that seems perfectly normal to a user, but to the machine learning model interpreting that sound it looks totally different, and it launches some command you never intended.
Jason: And two weeks later, you got 22 tons of creamed corn showing up at your door.
Samuel: Exactly. A nice order has been placed on Amazon.
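A rough sketch of how such a hidden command could be optimized, assuming a hypothetical differentiable speech-to-text model asr_logits(waveform) and a loss_toward(logits, text) helper that rewards transcribing the attacker's target phrase. Everything here is an assumption for illustration, not a recipe for any real assistant:

```python
import torch

def hide_command(asr_logits, loss_toward, waveform, target_text,
                 steps=500, max_change=0.002):
    """Add a tiny perturbation so the model hears `target_text` in normal-sounding audio."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=1e-3)
    for _ in range(steps):
        loss = loss_toward(asr_logits(waveform + delta), target_text)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        delta.data.clamp_(-max_change, max_change)   # keep the change hard for humans to hear
    return (waveform + delta).detach()
```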
Yeah. I know of an instance where somebody was sitting in a taxi, and it was election season, and there was a poster for a candidate with the number 120. And the car actually had this system that’s supposed to recognize traffic signs. And it was looking at that and said 120 kilometers per hour, must be on a freeway. That’s around what, 75-80 miles per hour, for you Americans. But yeah, it figured that this must be a freeway, because that’s the only place where you get those signs.
Jason: I’ve seen a similar example where there was a billboard with a politician that was saying something like, “Stop eminent domain abuse,” or something like that. And the word “stop” was in huge letters, so their car kept stopping, in the middle of a freeway this time. Sort of the opposite situation.
That’s not what you want…Yeah, again, I just want to explore that so much, because self-driving vehicles is one of my pet projects and I’m super interested in that. And one of the first questions I have is, “Why would that car not take the circumstances into account?” I’m on the freeway, there’s a car in front of me that’s going at top speed. Why would there be a stop sign on the freeway, when nobody else is stopping around me? Why would that be the case?
Jason: Because we don't know how to encode context into these things, basically.
Is it that simple?
Jason: All we can do is throw data in and hope that the answer comes out. So if we throw in a bunch of data that says stop at stop signs, which you’d think would be reasonable, then it sees the word stop, it’s going to think, “I’d better stop.” And there’s no way of really teaching it, unless you already have a situation where you’ve got stop on billboards and that sort of thing in your training data.
That’s disheartening, because you would want… I’m playing around with consumer-grade home automation right now. And I’m developing all these rules, like, “If this, then that.” But there’s a lot of, “Only if this also happens,” and, “Except when this is happening,” and stuff like that. So you’d think that…Because these are very simple rules and simple scripts, so you would think that a machine learning algorithm that’s so much more complex would be able to take that on board as well, but that’s not the case.
Jason: It could. Fundamentally, all these machine learning techniques do is say, "Generate your own list of if-then-else conditions, and then we'll tell you if it was a good list or not." So we have to come up with ways of actually encoding that contextual information into these things if we want them to learn it.
That’s a very interesting thing. So basically, it sounds to me like we’re talking about developing a machine learning algorithm that seems to be doing what we hoped it was doing, and then fuzzing it, just giving it all sorts of weird random things and seeing if anything happens to break it.
Jason: Yeah. And then, of course, the challenge is you have to figure out what should be the correct response in all those weird random situations.
Oh yeah. Yeah. We could start by trying not to crash and kill everyone in the vehicle. So, that would be a good place to start.
Jason: It is a good strategy.
So thank you very much.
Jason: Thanks for having us.
Samuel: Thanks. It was very nice.
That was the show for today. I hope you enjoyed it. Please get in touch with us through Twitter @CyberSauna with your feedback, comments and ideas. Thanks for listening. Be sure to subscribe.