Monday, September 28, 2020

Deep Learning & How It Works - Everything You Should Know

What is Deep Learning

Deep Learning is a machine learning technique that builds artificial neural networks to mimic the structure and function of the human brain. In practice, deep learning, also called deep structured learning or hierarchical learning, uses many hidden layers (typically more than six, and often far more) of nonlinear processing to extract features from data and transform the data into progressively more abstract levels of representation.
As an example, assume the input data is a matrix of pixels. The first layer typically abstracts the pixels and recognizes the edges of features in the image. The next layer might build simple features from those edges, such as leaves and branches. The subsequent layer could then recognize a tree, and so on. The data passing from one layer to the next is considered a transformation, turning the output of one layer into the input of the next. Each layer corresponds to a unique level of abstraction, and the machine can learn on its own which features of the data belong at which level. Deep learning is differentiated from traditional “shallow learning” because it learns much deeper hierarchies of abstraction and representation.
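To make this layer-by-layer idea concrete, here is a minimal sketch in Python (using NumPy, with made-up layer sizes and random weights rather than a trained model) of how an input is passed through a stack of nonlinear layers, with the output of each layer becoming the input of the next:

import numpy as np

def relu(x):
    # Nonlinear activation: keep positive values, zero out the rest
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Toy "image": a flattened 8x8 grid of pixel intensities
pixels = rng.random(64)

# Layer sizes are arbitrary examples; each layer here is just a random weight matrix
layer_sizes = [64, 32, 16, 8]
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

# The output of one layer becomes the input of the next,
# giving progressively more abstract representations
representation = pixels
for i, W in enumerate(weights, start=1):
    representation = relu(W @ representation)
    print(f"layer {i}: representation of size {representation.shape[0]}")

In a real network the weights would be learned from data rather than drawn at random, but the flow of information through the layers is the same.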

Evolution of Deep Learning

Deep learning is one of the most important developments in AI today. Rather than trying to cover every detail of the field, which would make this article far too long, let’s look at some of the main developments in the evolution of deep learning.

Although the study of the human brain is thousands of years old, the first step toward neural networks took place in 1943.

In 1943

Warren McCulloch, a neurophysiologist, and Walter Pitts, a young mathematician, wrote a paper on how neurons might work. They modeled a simple neural network with electrical circuits.

In 1958

Frank Rosenblatt creates the perceptron, an algorithm for pattern recognition based on a two-layer computer neural network using simple addition and subtraction. The perceptron computes a weighted sum of the inputs, subtracts a threshold, and passes one of two possible values out as the result.
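As a rough illustration (not Rosenblatt’s original hardware or training procedure), the perceptron’s decision rule can be sketched in a few lines of Python; the inputs, weights, and threshold below are made-up values:

def perceptron(inputs, weights, threshold):
    # Weighted sum of the inputs
    total = sum(w * x for w, x in zip(weights, inputs))
    # Subtract the threshold and output one of two possible values
    return 1 if total - threshold >= 0 else 0

# Example call with arbitrary numbers
print(perceptron([1, 0, 1], weights=[0.5, -0.2, 0.8], threshold=1.0))  # prints 1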

In 1980

Kunihiko Fukushima proposes the Neocognitron which is a hierarchical, multilayered artificial neural network. It's been used for handwriting recognition and other pattern recognition problems.

In the 1980s-1990s

John Hopfield presented a paper to the National Academy of Sciences describing his approach to creating useful devices from neural networks (now known as Hopfield networks).
At the Joint Conference on Cooperative/Competitive Neural Networks, Japan announced its Fifth-Generation effort, which left the US worried about being left behind. Soon, funding was flowing once again.
The term “Deep Learning” was introduced to the machine learning community by Rina Dechter in 1986.
Yann LeCun built a machine that could read handwritten digits. The invention fell beneath the wider world’s radar: the algorithm worked, but it required about three days of training.
Around this time the second AI winter kicked in, which also affected research on neural networks and Deep Learning. Various overly optimistic individuals had exaggerated the “immediate” potential of AI, dashing expectations and angering investors. Luckily, some people continued to work on AI and DL, and significant advances were made. In 1995, Corinna Cortes and Vladimir Vapnik developed the support vector machine.
In 1997, Sepp Hochreiter and Jürgen Schmidhuber published a milestone paper on “Long Short-Term Memory” (LSTM), a kind of RNN architecture that would go on to revolutionize deep learning in the decades to come.

In 2006

Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh published the paper “A Fast Learning Algorithm for Deep Belief Nets”, in which they stacked multiple Restricted Boltzmann Machines together in layers and called the result Deep Belief Networks. This layer-by-layer training process is much more efficient for large amounts of data.

In 2008

Andrew Ng’s group at Stanford started advocating the use of GPUs to train Deep Neural Networks, speeding up training time many-fold. This brought practicality to the field of Deep Learning, making it possible to train efficiently on large volumes of data.

In 2009

Finding enough labeled data has always been a challenge for the Deep Learning community. In 2009, Fei-Fei Li, an AI professor at Stanford, launched ImageNet, a free database of more than 14 million labeled images. It would serve as a benchmark for deep learning researchers participating in the annual ImageNet competition (ILSVRC).

In 2012

AlexNet, a GPU-implemented CNN model designed by Alex Krizhevsky, won ImageNet’s image classification contest with an accuracy of 84%, a huge jump over the roughly 75% accuracy that earlier models had achieved. This win triggered a new deep learning boom globally.

In 2014

Ian Goodfellow created the GAN, also known as the Generative Adversarial Network. GANs opened an entirely new door for applications of deep learning in fashion, art, science, and more.

In 2016

DeepMind’s deep reinforcement learning model beat the human champion at the complex game of Go, a game much more complex than chess. This feat captured the imagination of everyone and took the promise of deep learning to an entirely new level.

In 2019

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun won the 2018 Turing Award for their immense contributions to advancements in deep learning and AI. This was a defining moment for those who had worked relentlessly on neural networks.

By 2012, deep learning had already been used to help people turn left at Albuquerque (Google Street View) and to answer questions about the estimated average airspeed velocity of an unladen swallow (Apple’s Siri). In June 2012, Google linked 16,000 computer processors, gave them Internet access, and watched as the machines taught themselves how to identify…cats. What may seem laughably simplistic was, in fact, earth-shattering as scientific progress goes.

Advantages of Deep Learning

The following are the benefits or advantages of Deep Learning:

  Features are automatically deduced and optimally tuned for the desired outcome. They do not need to be extracted ahead of time, which avoids time-consuming manual feature engineering.
  Robustness to natural variations in the data is automatically learned.
  The same neural network-based approach can be applied to many different applications and data types.
  Massive parallel computations can be performed using GPUs and are scalable for large volumes of data. Moreover, deep learning delivers better performance when the amount of data is huge.
  The deep learning architecture is flexible to be adapted to new problems in the future.

Disadvantages of Deep Learning

The following are the drawbacks or disadvantages of Deep Learning:

 It requires a very large amount of data to perform better than other techniques.
 It is extremely expensive to train due to complex data models. Moreover, deep learning requires expensive GPUs and hundreds of machines. This increases the cost to the users.
 There is no standard theory to guide you in selecting the right deep learning tools as it requires knowledge of topology, training method, and other parameters. As a result, it is difficult to be adopted by less skilled people.
 The learned representations are not easy to interpret on their own; classifiers are required to turn them into comprehensible output, a task performed by convolutional neural network-based algorithms.

If you found this interesting, read the full article - Deep Learning


