What is Enabling Breakthroughs in Artificial Intelligence?

Noora Hyvärinen

26.06.18 8 min. read

Hemingway’s Law of Motion states that change happens gradually, and then suddenly. And although his original quote referred to the process of going bankrupt, it can be applied to plenty of other phenomena, from the evolution of life on Earth to the explosion of technology and innovation we experience every day.

To us, change happens gradually and then suddenly because of its exponential nature, something that is intricately linked to a phenomenon called the Law of Accelerating Returns. This law posits, quite simply, that every innovation makes the next innovation easier to accomplish.

Wheat and Chessboards

To understand why an exponential change appears to us to happen gradually, and then suddenly, consider the wheat and chessboard problem. If a chessboard were to have wheat placed upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?

It is simple enough to calculate the final number: 18,446,744,073,709,551,615 grains of wheat, which would weigh about 1,199,000,000,000 metric tons, or roughly 1,600 times the global annual production of wheat in 2014.

What’s interesting about the wheat and chessboard problem is what happens as we move from the first half to the second half of the chessboard. Filling the first 32 squares would take just over 4 billion grains of wheat, which seems like a lot (about 279 tonnes of it, give or take), but it’s nothing compared to the second half. The 33rd tile alone would contain more grains than the entire first half, and the 64th tile would contain more than two billion times the number of grains on the first half of the board.
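These figures are easy to verify in a few lines of Python (the weight of roughly 65 milligrams per grain is a common estimate, not a measured value):

```python
# A quick sanity check of the chessboard figures above.
grains_total = 2**64 - 1          # sum of 2**i for i = 0..63
first_half   = 2**32 - 1          # squares 1 through 32
square_33    = 2**32
square_64    = 2**63

print(f"{grains_total:,}")              # 18,446,744,073,709,551,615
print(f"{first_half:,}")                # 4,294,967,295 (just over 4 billion)
print(square_33 > first_half)           # True: square 33 beats the whole first half
print(f"{square_64 // first_half:,}x")  # ~2,147,483,648x: over two billion times

# At roughly 65 mg per grain (a common estimate, not a measured value):
print(f"{grains_total * 0.065 / 1e6:,.0f} tonnes")  # ~1,199,000,000,000 tonnes
```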

Moving from the first half to the second half of the board represents a point in time where an exponentially growing phenomenon begins to exert a significant economic impact. From the point of view of a new innovation or technology, this is the point where it becomes mainstream.

The term “Artificial Intelligence” was first coined in the 1950s, around the same time as the first implementation of the perceptron – a structure that has gone on to form the basis of today’s neural networks. The idea of backpropagation, the mechanism that many types of neural networks use to adjust weights during training, was first proposed in the 1970s. And although research into machine learning and neural networks has continued (to varying degrees) over the last 70 years, breakthroughs in artificial intelligence, and services leveraging it, have only made headlines recently. To use the analogy from above, it appears artificial intelligence has just entered the second half of the chessboard.

Precursors Enabling Innovation in AI

New innovations are spawned when several precursors exist. For instance, today’s smartphones required innovation in batteries, screens, touch screen technologies, digital cameras, miniaturization of storage, low powered processors, radio technology, and supporting frameworks, such as manufacturing and software.

To understand what has propelled AI onto the second half of the chessboard, let’s take a look at the precursors that have enabled innovation in the field to happen at an increasing pace.

Compute

To put it simply, the amount of compute required to do this stuff hasn’t existed until very recently. If we plot the number of transistors on an integrated circuit between 1970 and today, we’d see yet another exponential curve. Between 1970 and 2000, that number grew from a few thousand to tens of millions. Over the following decade it climbed into the billions, and today’s largest chips pack close to 20,000,000,000 transistors. Although there are theories that Moore’s Law no longer holds for CPUs (due to the laws of physics), let us not forget that GPUs are the kings of compute for machine learning tasks, and that dedicated Tensor Processing Units are already being manufactured.

The rate of growth of the compute resources required to support machine learning is still exponential. And that exponential growth isn’t just limited to computing power – it applies to storage, power efficiency, cost, and size of components.

Cloud compute resources have enabled machine learning research by reducing the cost of entry to the field. Seriously powerful computing resources are available to the public, for rent, at low cost (compared to buying a lot of expensive hardware or having access to a university’s supercomputer). For instance, at the time of writing, you can rent a c5.18xlarge instance on AWS, with 72 vCPUs and 144GB of RAM, for $1.15 per hour, or a p3.16xlarge instance, with 8 GPUs (each pairing 5,120 CUDA cores with 640 Tensor cores), 64 vCPUs, and 488GB of RAM, for about $9 an hour (both as spot instances).

Availability of Data

It is generally stated that roughly ninety percent of all data in existence was created within the last two years. Estimates from a year ago put the figure at roughly 2.5 quintillion (2,500,000,000,000,000,000) bytes of new data generated every day.

Training a supervised machine learning model requires accurately labeled data, and the more samples you have, the better your chances of creating a useful and accurate model. Smartphones and social media have vastly expanded this pool of data over the past decade. Such a rich source of data has not existed for very long at all, and it has most definitely contributed to the birth of our current era of machine learning.

Quality of Tooling

If you’re using Python (which has become the most popular language for machine learning and data science tasks), you have a wide array of easy-to-use tools at your disposal. Let’s start with Jupyter, an interactive notebook environment that allows researchers to write and run code, and generate output (in many rich formats), directly within a simple browser-based interface. On top of that, you have access to a mathematical framework (NumPy), data manipulation libraries (pandas), and user-friendly implementations of standard scientific and machine learning algorithms (SciPy, scikit-learn). A large number of libraries exist that allow a researcher to quickly and easily visualize data (the most commonly used being matplotlib). Frameworks exist for creating and working with neural networks (TensorFlow, Keras, PyTorch). Support frameworks exist for certain types of tasks, such as OpenAI Gym, which provides simulated environments (from 3D physics tasks to classic video games) for reinforcement learning research. All of these frameworks are being actively improved, and because they are all open source, they’ve been built to interoperate with, and reinforce, each other.
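To make that concrete, here’s a minimal sketch of this stack in action – train a standard classifier and plot the results – using the Iris dataset bundled with scikit-learn so it runs without any external data:

```python
# A minimal sketch of the stack described above: pandas for data
# handling, scikit-learn for a standard algorithm, matplotlib for
# visualization. Uses the Iris dataset bundled with scikit-learn,
# so it runs without any external data.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

iris = load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target

# Hold out a test set, train a classifier, and report its accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Plot two of the four features, colored by species.
plt.scatter(X_test.iloc[:, 0], X_test.iloc[:, 1], c=y_test)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
```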

This list of cheat sheets for various machine learning libraries and frameworks might be the best way to visualize how much of an impact tooling is having on AI-related research.

Momentum

Proven results, coupled with a drastic increase in new avenues of research over the last few years, have led to massively increased interest and investment in machine learning. As an example, about 12,000 new papers are submitted to arxiv.org (one of many repositories for scientific papers) every month.

On the back of recent successes using neural networks, researchers are revisiting other previously proposed techniques in the field of machine learning. For example, researchers at Uber recently demonstrated that genetic algorithms could be used to create models capable of playing Atari video games. These models had previously only been created using reinforcement learning techniques.
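To loosely illustrate the general idea (this is not Uber’s actual setup, which evolved neural network weights scored by game play), a genetic algorithm improves a population of parameter vectors through mutation and selection alone, with no gradients involved:

```python
# A toy genetic algorithm: evolve a parameter vector toward a hidden
# target using only mutation and selection; no gradients involved.
# (Illustrative only: Uber's experiments evolved neural network
# weights scored by Atari game play, not a simple distance function.)
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=10)              # stand-in for "good weights"

def fitness(v):
    return -np.sum((v - target) ** 2)     # higher is better

# Start from a random population of candidate vectors.
population = [rng.normal(size=10) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]           # truncation selection
    # Refill the population with mutated copies of random survivors.
    population = survivors + [
        survivors[i] + rng.normal(scale=0.1, size=10)
        for i in rng.integers(0, len(survivors), size=40)
    ]

best = max(population, key=fitness)
print("best fitness:", fitness(best))     # approaches 0 as it converges
```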

Hammer and Nail

Right now, humans and AI are both good at tasks such as identifying a cat in a picture, translating between two languages, and identifying and prioritizing strategies (e.g. chess, Go, driving a vehicle). AI beats humans at tasks that involve correlating a large number of data points, and at highly repetitive tasks. Conversely, humans are better than AI at tasks that require creativity, applying lessons learned in one field to another, and making decisions in the absence of data.

We’re all aware that machine learning techniques are already being applied successfully in commercial fields such as image recognition, speech recognition, language translation, and recommendation systems. But you might not realize that even the simplest of machine learning techniques, when used correctly, can make sense of data, or visualize it in ways that expose trends and anomalies that would otherwise never be apparent to the human eye.
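As a minimal sketch of that idea (synthetic data, assumed parameters), here’s an off-the-shelf anomaly detector – scikit-learn’s IsolationForest – flagging a handful of oddball points planted among normal ones:

```python
# A minimal sketch: plant a few oddball points in synthetic data and
# let an off-the-shelf technique (IsolationForest) flag them.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_points = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
oddballs = rng.uniform(low=-6.0, high=6.0, size=(10, 2))
data = np.vstack([normal_points, oddballs])

# fit_predict returns -1 for suspected anomalies, 1 for normal points.
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(data)
print("flagged:", int(np.sum(labels == -1)), "of", len(data), "points")
```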

As machine learning techniques become more varied and powerful, and tools that incorporate machine learning techniques become more user-friendly, more and more problems are going to start looking like the nail to machine learning’s hammer.

Winter is Not Coming

While it’s true that artificial intelligence research has suffered a few “winters” over the last 70 years, the current momentum behind the field makes another one highly unlikely. Machine learning is the final frontier, and it’s here to stay. Many businesses are waking up to this realization, and wondering how to prepare themselves to operate in an increasingly AI-driven world.

Here are our recommendations:


  1. Understand what machine learning and artificial intelligence are, how the various techniques work, and how they can be applied to your field.
  2. Start building competence within your organization.
  3. Create a data strategy – take a look at the data you work with, and how machine learning techniques can be used to make better sense of that data or to create automation.
  4. Apply machine learning to increase your organization’s efficiency.
  5. Integrate machine learning into your products and services, where appropriate.


Businesses that embrace machine learning are destined to become more efficient, as they free their people up to do the things humans are better at, which, in turn, will amplify those gains even further. It’s pretty obvious that the way companies approach artificial intelligence today will have a significant impact on how competitive they’ll be even a few short years from now.

This article was written by Andrew Patel, a researcher at F-Secure’s Artificial Intelligence Center of Excellence.
