Why good AI goes bad
Fears about walking, talking robots eradicating humanity are entertaining, but not a very realistic concern at this time. However, that doesn’t mean people shouldn’t have other concerns about artificial intelligence (AI). AI is a fact of life. And it’s normal to be a little uneasy about the tensions between the technology and the ways it’s beginning to change society. An EU-funded research project called SHERPA was created in 2018 to understand the ethical and human rights consequences of AI and big data analytics, and to help develop ways of addressing those tensions. F-Secure, a partner in SHERPA, contributed its views on cyber security and AI as part of a new SHERPA study called “Security Issues, Dangers and Implications of Smart Information Systems.”
An important issue discussed by the paper is why AI can sometimes go “bad”. And one answer is…well…some AI just isn’t very good. Not “good” in the ethical sense of the word, but in the quality of the solution. The high demand for AI has seen it spread rapidly. Amazon, Microsoft and Google have all launched machine learning-as-a-service offerings to help more organizations cash in on AI’s potential.
According to the study, this has helped bring the power of AI to many organizations that don’t necessarily understand the subtleties of the technology, leading to flawed implementations that don’t work as intended.
“Most current AI really isn’t all that smart. It does exactly what it was trained to do – whether that’s good or bad,” says Matti Aksela, F-Secure Vice President and head of the company’s Artificial Intelligence Center of Excellence. “The most common causes of ‘bad’ AI are bias or errors in the data, and applying models to situations they were not intended or designed for. For example, if you train your AI system with biased data, you should not expect the bias to go away – the solution will by default replicate that same bias, and we see many examples of that nowadays.”
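To make Aksela’s point concrete, here is a minimal sketch of bias replication, assuming entirely hypothetical data and feature names (a “score” and a sensitive “group” attribute – none of this comes from the study itself). A simple classifier trained on biased historical decisions reproduces the bias rather than removing it.

```python
# Toy sketch (hypothetical data, not from the SHERPA study): a model
# trained on biased historical decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

score = rng.normal(0, 1, n)      # a legitimate signal
group = rng.integers(0, 2, n)    # a sensitive attribute (0 or 1)

# Biased historical labels: group 1 was approved less often at the
# same score, purely because of past human bias.
y = (score - 1.0 * (group == 1) + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, y)

# At an identical score, the trained model gives group 1 a much lower
# predicted approval probability: the bias didn't "go away".
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```

Nothing in the training step “fixes” the skewed labels; the model’s job is to fit them, so it does.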
The study outlines three main types of issues that occur when designing, training, and deploying AI.
Incorrect Design Decisions
Typically, human beings (as opposed to machines) create today’s AI models. They make decisions about what the algorithms should do, and how they’ll do it. And humans make mistakes, which means the AI they create can also make mistakes.
Some common mistakes outlined in the study include choosing the wrong features of a data set to analyze for a specific task, or choosing inappropriate or subpar architectures and parameters for a given model.
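As a loose, hypothetical illustration of the first of those mistakes (the data and feature names are invented, not taken from the study), the sketch below trains two identical models that differ only in which column they’re given. The one fed an irrelevant feature performs no better than a coin flip.

```python
# Toy sketch (hypothetical data): identical models, different feature
# choices. The model given an irrelevant column is no better than chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000

signal = rng.normal(0, 1, n)   # actually related to the label
noise = rng.normal(0, 1, n)    # unrelated to the label
y = (signal + rng.normal(0, 0.3, n)) > 0

for name, col in [("relevant feature", signal), ("irrelevant feature", noise)]:
    X_tr, X_te, y_tr, y_te = train_test_split(col.reshape(-1, 1), y,
                                              random_state=0)
    acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy ~ {acc:.2f}")  # ~0.95 vs ~0.50
```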
Ironically, the study highlights services that claim to detect fake accounts and bots on Twitter as an example of improperly designed AI.
Deficiencies in Training Data
Despite the vast surveillance apparatus embedded in modern technology, incomplete or limited data sets remain a challenge for AI. There’s a raft of quality issues involved in training an AI model. Improperly collected or processed data often ends up being used as training sets for a model. After all, doing these things properly can cost time and money that some businesses aren’t willing to invest. AI learns imbalances, biases, and assumptions along with everything else in a data set, which results in problems with the AI’s decisions.
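One common shape this problem takes is class imbalance. The sketch below is a hypothetical example (invented data, not from the study): a model trained on a data set where the interesting class is rare can report an impressive accuracy number while detecting almost none of the cases that actually matter.

```python
# Toy sketch (hypothetical data): with a rare, noisy positive class,
# a model can report ~99% accuracy while catching almost none of the
# cases that actually matter.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
n = 20000

risk = rng.normal(0, 1, n)              # one weak risk signal
y = (risk + rng.normal(0, 1, n)) > 3.3  # ~1% positive rate

model = LogisticRegression().fit(risk.reshape(-1, 1), y)
pred = model.predict(risk.reshape(-1, 1))

print("accuracy:", (pred == y).mean())     # ~0.99 -- looks great
print("recall:  ", recall_score(y, pred))  # near 0 -- misses the rare class
```

Mitigations exist (reweighting the classes, collecting more representative data, evaluating with metrics beyond accuracy), but all of them require the time and money mentioned above.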
There have been several blatant examples that could easily lead one to infer that if some of these AI models were people, they’d be a bunch of racist, sexist jerks. But this is exactly the type of problem that SHERPA is attempting to address.
Incorrect Utilization Choices
AI models are great for certain tasks. But that doesn’t mean they’re a magic bullet for every problem. In other words, an AI model designed to play Go – such as the AlphaGo program that made headlines in 2017 – can’t drive a car, or even play checkers. The input provided to the AI in these tasks is completely different. And so is the output.
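At the most mechanical level, the mismatch shows up before any “intelligence” is involved: a model trained on one kind of input can’t even accept another. The shapes below are hypothetical stand-ins, not the real AlphaGo interface.

```python
# Toy sketch (hypothetical shapes, not the real AlphaGo interface):
# a model trained on 19x19 Go-board-style input can't even accept a
# camera frame, let alone reason about driving.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

boards = rng.integers(0, 2, (200, 19 * 19))  # flattened board states
outcomes = rng.integers(0, 2, 200)           # win/loss labels
model = LogisticRegression(max_iter=1000).fit(boards, outcomes)

camera_frame = rng.normal(0, 1, (1, 640 * 480))  # a "driving" input
try:
    model.predict(camera_frame)
except ValueError as err:
    print(err)  # feature-count mismatch: the model expects 361 inputs
```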
How humans can help AI
For the time being, AI needs people’s help if it’s going to avoid embarrassing or horrifying mistakes. Part of SHERPA’s work involves conducting research for use by policy makers, programmers, and businesses.
While the study doesn’t offer easy answers, it does have some advice for those working with AI models. And it starts by breaking the problem down into four areas: understand the problem domain, prepare your training data, design your model, and implement production processes.
We’ve worked with our SHERPA partners to prepare tips for each phase. You can find crib notes for them by following the links.
“While AI can do some things better than humans, the AI we use now is very narrow, and hence it can also produce results that don’t really make sense at all,” says Aksela. “So applying a little human intelligence to the task of designing your AI, and making sure it will actually do what you expect, is usually a good idea.”