Artificial Intelligence vs. Machine Learning vs. Deep Learning: What’s the Difference?
While artificial intelligence, machine learning, and deep learning are buzz words we hear everywhere these days, there are big misconceptions about what these words really mean. Many companies claim to incorporate some sort of artificial intelligence (AI) into their applications or services, but what does this mean in practice?
Broadly speaking, AI describes when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving. At an even more basic level, AI can simply be a programmed rule that tells the machine to behave in a specific way in certain situations. In other words, artificial intelligence can be nothing more than multiple if-else statements.
An if-else statement is a simple rule programmed by a human. Consider a robot moving on a road. A rule programmed for this robot could be:
```python
if something_is_in_the_way:
    stop_moving()
else:
    continue_moving()
```
So, when we talk about artificial intelligence, it’s more interesting to consider two more specific sub-areas of AI: machine learning and deep learning.
Machine learning or deep learning
Now that we have a better understanding of what AI really means, we can take a closer look at machine learning and deep learning to make a clear distinction between these two.
AI vs Machine Learning vs Deep Learning
- Artificial intelligence: a program that can sense, reason, act and adapt
- Machine learning: algorithms whose performance improves as they are exposed to more data over time
- Deep learning: a subset of machine learning in which multilayer neural networks learn from large amounts of data
Machine learning is not a new technology
What is machine learning? We can think of machine learning as a series of algorithms that analyze data, learn from it, and make informed decisions based on this acquired knowledge.
Machine learning powers a wide variety of automated tasks. It affects virtually every industry, from malware detection in computer security and weather forecasting to stock traders looking for optimal trades. Machine learning requires complex math and a lot of coding to get the functions and results you want. Machine learning also incorporates classic algorithms for various types of tasks, such as clustering, regression, and classification. We need to train these algorithms on large amounts of data: the more data you provide to your algorithm, the better your model.
Machine learning is a relatively old field and incorporates methods and algorithms that have been around for decades, some since the 1960s. These classic algorithms include the Naive Bayes classifier and support vector machines (SVMs), both of which are often used for data classification. In addition to classification, there are also cluster analysis algorithms such as K-Means and hierarchical (tree) clustering. To reduce the dimensionality of data and better understand its nature, machine learning uses methods such as principal component analysis and t-SNE.
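To make one of these classic algorithms concrete, here is a minimal sketch of K-Means clustering on one-dimensional data, written in plain Python. The data and the two-cluster setup are made up for illustration; real implementations (e.g., in scikit-learn) handle multi-dimensional data and smarter initialization.

```python
import random

def k_means(points, k, iterations=10, seed=0):
    """Minimal K-Means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned cluster."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: index of the closest centroid for each point
        labels = [min(range(k), key=lambda c: (p - centroids[c]) ** 2)
                  for p in points]
        # Update step: each centroid becomes the mean of its cluster
        for c in range(k):
            cluster = [p for p, lbl in zip(points, labels) if lbl == c]
            if cluster:
                centroids[c] = sum(cluster) / len(cluster)
    return sorted(centroids)

# Two obvious 1-D clusters, one around 0 and one around 10
data = [0.0, 0.5, 1.0, 9.0, 9.5, 10.0]
print(k_means(data, 2))  # centroids settle near 0.5 and 9.5
```

Even this toy version shows the pattern shared by classic algorithms: a simple rule applied repeatedly to data until the result stabilizes.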
The training component of a machine learning model means that the model tries to optimize along a certain dimension. In other words, machine learning models attempt to minimize the error between their predictions and the actual values of the ground truth.
For this we need to define a so-called error function (also called a loss function or objective function), because every model needs an objective. This objective could be to classify the data into different categories (for example, images of cats and dogs) or to predict the expected price of a stock in the near future. When someone says they are working with a machine learning algorithm, you can understand the gist of its value by asking: what is the objective function?
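One of the most common objective functions is mean squared error: the average squared gap between predictions and ground truth. A minimal sketch (the numbers are made up for illustration):

```python
def mse(predictions, targets):
    """Mean squared error: average squared gap between
    model predictions and ground-truth values."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# The closer the predictions are to the truth, the smaller the objective
print(mse([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))  # 0.1666...
print(mse([3.0, -0.5, 2.0], [3.0, -0.5, 2.0]))  # 0.0 — a perfect fit
```

Training then boils down to adjusting the model's parameters to push this number down.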
How to minimize errors?
We can compare the model prediction with the ground truth value and adjust the model parameters so that the next time the error between these two values is smaller. This process is repeated millions of times until the model parameters that determine the predictions are so good that the difference between the model predictions and the ground truth labels is as small as possible.
In short, machine learning models are optimization algorithms. If you set them correctly, they minimize errors by guessing, guessing, and guessing again.
Deep learning: the next big thing
Unlike machine learning, deep learning is a young subfield of artificial intelligence based on artificial neural networks.
Since deep learning algorithms also require data to learn and solve problems, we can also call it a subdomain of machine learning. The terms machine learning and deep learning are often treated as synonyms. However, these systems have different capabilities.
Deep learning uses a multi-layered structure of algorithms called a neural network.
Artificial neural networks have unique abilities that allow deep learning models to solve tasks that machine learning models could never solve.
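As a toy illustration of that multi-layered structure, here is a minimal forward pass through a two-layer network in plain Python. The weights are made up for illustration; in a real network they would be learned from data:

```python
def relu(vector):
    """A common nonlinearity: negative values are clipped to zero."""
    return [max(0.0, x) for x in vector]

def dense(inputs, weights, biases):
    """One fully connected layer: each output is a weighted sum plus a bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A toy network: 3 inputs -> 2 hidden units -> 1 output.
x = [1.0, 2.0, 3.0]
hidden = relu(dense(x, [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]))
output = dense(hidden, [[1.0, -1.0]], [0.0])
print(output)
```

Stacking many such layers, each feeding its output to the next, is what makes the network "deep" and lets it build increasingly abstract representations of the input.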
Most recent advances in artificial intelligence are due to deep learning. Without deep learning, we wouldn’t have self-driving cars, chatbots, or personal assistants like Alexa and Siri. Google Translate would remain primitive and Netflix would have no idea what movies or TV shows to suggest.
We can even go so far as to say that the new industrial revolution is driven by artificial neural networks and deep learning. This is the best and closest approach to true artificial intelligence that we have so far, as deep learning has two major advantages over machine learning.
Why is Deep Learning better than Machine Learning?
No feature extraction
The first advantage of deep learning over machine learning is that it makes manual feature extraction redundant.
Before deep learning came into use, traditional machine learning methods (decision trees, SVMs, the Naive Bayes classifier, and logistic regression) were the most popular. These are also known as flat algorithms. In this context, “flat” means that these algorithms generally cannot be applied directly to raw data (such as .csv files, images, or text). Instead, we need a preprocessing step called feature extraction.
In feature extraction, we provide an abstract representation of raw data that classic machine learning algorithms can use to perform a task (i.e., classify data into multiple categories or classes). Feature extraction is usually quite complicated and requires detailed knowledge of the problem area. This step must be adapted, tested and refined over several iterations for optimal results. Deep learning models do not need feature extraction.
Deep learning models, by contrast, are built on artificial neural networks, which do not require feature extraction: the layers learn an implicit representation of the raw data on their own.
A deep learning model produces an abstract and compressed representation of raw data across multiple layers of an artificial neural network. We then use a compressed representation of the input data to produce the result. The result can be, for example, the classification of the input data into different classes.
During the learning process, the neural network optimizes this step to obtain the best possible abstract representation of the input data. Deep learning models require little or no manual effort to perform and optimize the feature extraction process. In other words, feature extraction is built into the process that takes place in an artificial neural network without human intervention.
If you want to use a machine learning model to determine whether or not a particular image shows a car, we humans first need to identify the unique characteristics of a car (shape, size, windows, wheels, etc.), extract these characteristics, and give them to the algorithm as input data. The machine learning algorithm would then perform a classification of the image. That is, in machine learning, a programmer must intervene directly in the classification process.
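The manual step described above can be sketched as follows. The feature names and the decision rule here are entirely made up for illustration; a real pipeline would feed the extracted feature vector to a learned classifier rather than a hand-written rule:

```python
# Hypothetical sketch of the manual feature-extraction step a classic
# machine-learning pipeline needs before it can classify anything.
def extract_features(record):
    """Turn raw data into the hand-crafted numbers a flat algorithm expects."""
    return [
        record["wheel_count"],
        record["window_count"],
        record["width"] / record["height"],  # aspect ratio
    ]

def is_car(features):
    """Stand-in for a trained classifier, using a hand-written rule."""
    wheels, windows, aspect = features
    return wheels >= 4 and windows >= 2 and aspect > 1.0

raw = {"wheel_count": 4, "window_count": 6, "width": 400, "height": 180}
print(is_car(extract_features(raw)))  # True
```

Every feature in `extract_features` had to be chosen by a human who already knows what makes a car a car; a deep learning model would instead learn such features directly from raw pixels.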
This applies to all the other tasks that you will do with neural networks. Give the raw data to the neural network and let the model do the rest.
The era of big data
The other major benefit of deep learning, and a key to understanding why it’s becoming so popular, is that it’s powered by massive amounts of data. The era of big data technology will offer enormous possibilities for new innovations in deep learning.
Deep learning models tend to become more accurate as the amount of training data grows, while traditional machine learning models such as SVMs and the Naive Bayes classifier stop improving after a saturation point.
Deep learning models scale best with more data. To paraphrase Andrew Ng, chief scientist of China’s leading search engine Baidu, co-founder of Coursera and one of the leaders of the Google Brain Project, if a deep learning algorithm is a rocket engine, data is the fuel.
This article originally appeared on Towards Data Science.