Machine learning
Machine learning is divided into supervised learning and unsupervised learning. In supervised learning, the machine is given labeled data, builds a model from it, and applies that model to newly seen data to make predictions or assign labels. An example is image recognition, where the machine learns to recognize images and outputs what it sees (for example, whether a picture shows a dog or a cat). To build such a model, we must provide many training images of dogs and cats, each explicitly labeled as a dog or a cat. The other technique is unsupervised learning, which operates on raw (unlabeled) data and can find hidden patterns or anomalies in the dataset, or group similar points into clusters.
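To make the distinction concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the toy dataset and the choice of LogisticRegression and KMeans are illustrative, not taken from the text.

```python
# Minimal sketch of the two settings, assuming scikit-learn is installed.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=2, random_state=0)  # toy two-class data

# Supervised: labels y are given, so the model learns a mapping from X to y.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))           # predictions for newly seen points

# Unsupervised: only the raw X is used; the algorithm finds structure itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5])               # cluster assignments discovered from the data
```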
Deep learning
Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones.
It is based on neural networks, artificial models loosely inspired by the human brain. A neural network is composed of artificial neurons arranged in layers and interconnected in a way that resembles the structure of the brain. These neurons and the connections between them are configured during the training phase, based on the dataset the network is given.
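As an illustration of what a single artificial neuron computes, the NumPy sketch below takes a weighted sum of its inputs and passes it through a non-linear activation; the particular weights, bias, and tanh activation are assumptions made for the example.

```python
# A single artificial neuron, sketched in NumPy: a weighted sum of inputs
# passed through a non-linear activation. All values here are illustrative.
import numpy as np

def neuron(inputs, weights, bias):
    return np.tanh(inputs @ weights + bias)   # activation of one neuron

x = np.array([0.5, -1.2, 3.0])                # signals from three connected neurons
w = np.array([0.8, 0.1, -0.4])                # connection weights set during training
print(neuron(x, w, bias=0.2))
```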
One of the great challenges of classical machine learning is feature extraction, where the features a model learns from must be designed by hand; deep learning largely automates this step by learning features directly from the data. Deep learning refers to artificial neural networks composed of many layers (called hidden layers).
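The sketch below, again in NumPy, stacks several such layers so that each hidden layer re-represents the output of the one before it; the layer sizes and ReLU activation are illustrative assumptions.

```python
# A toy forward pass through several hidden layers, sketched in NumPy.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]        # input, two hidden layers, output (illustrative)
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    # Each hidden layer computes a new, more abstract representation of its input.
    for W in weights[:-1]:
        x = np.maximum(0.0, x @ W)      # ReLU non-linearity
    return x @ weights[-1]              # final layer produces the raw scores

x = rng.standard_normal((1, 784))       # one fake "image" flattened into a vector
print(forward(x, weights).shape)        # -> (1, 10)
```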
Deep learning is especially effective in image recognition because of its ability to extract and abstract features. For example, recognizing a face in a photo involves many layers of perception: recognizing eyes, hair, ears, and so on. The network is trained by exposing it to a large number of labeled examples. Errors are measured and the weights of the connections between neurons are adjusted to improve the results, in a process called backpropagation. This optimization is repeated until the network is well tuned. Once deployed, the tuned network can be used to assess unlabeled images.
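A hedged NumPy sketch of that train-and-adjust loop follows: a forward pass, an error measurement, and backpropagated weight updates. The toy data, the two-layer architecture, and the learning rate are assumptions for illustration, not the configuration described in the text.

```python
# Toy training loop: forward pass, error, and weight updates via backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))                          # toy inputs with 2 features
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy labels

W1 = rng.standard_normal((2, 16)) * 0.5
W2 = rng.standard_normal((16, 1)) * 0.5
lr = 0.1

for step in range(2000):
    # Forward pass through one hidden layer.
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))           # predicted probability
    # The error between prediction and label drives the update.
    grad_out = (p - y) / len(X)                   # gradient at the output
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1.0 - h ** 2)     # backpropagate through tanh
    grad_W1 = X.T @ grad_h
    # Adjust the connection weights to reduce the error.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(((p > 0.5) == y).mean())                    # training accuracy of the tuned network
```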
Deep learning models have been very effective at complex tasks such as sentiment analysis and computer vision. However, deep learning algorithms are often prohibitively computationally intensive, because training is slow: the network must build up a deep hierarchy of data abstractions and representations, from lower-level layers to higher-level layers.