These slides give an introduction to the performance of Deep Learning.
1. Hello and welcome to this lecture on Deep Learning performance. We will find out when deep learning is a better choice than classical machine learning algorithms and why deep learning has become so popular.
2. When does deep learning show higher performance than the other methods we have covered up to this point? The graph gives a qualitative answer: it shows model performance as a function of the amount of available training data. Classical machine learning techniques work better on small datasets, but their performance reaches a plateau; it does not increase much as more data becomes available. As the size of the training data grows, neural networks outperform classical techniques. There is a difference between small and large neural networks: small neural networks surpass classical techniques earlier, but their peak performance does not match that of large neural networks. Large neural networks only become the best choice once a significant amount of data is available, but then they can show their full potential and strongly outperform all other models.
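The qualitative relationship described above can be sketched numerically. In this hypothetical illustration (the curves, ceilings, and saturation points are made-up values, not measured results), each model's performance is modeled as a saturating function of training-set size: classical methods plateau early at a lower ceiling, while larger networks need more data but reach a higher ceiling.

```python
import numpy as np

def performance(n_samples, ceiling, half_saturation):
    """Simple saturating curve: approaches `ceiling` as data grows.
    `half_saturation` is the dataset size at which half the ceiling
    is reached. Both parameters are illustrative, not empirical."""
    return ceiling * n_samples / (n_samples + half_saturation)

# Dataset sizes from 100 to 10 million examples (log-spaced).
data = np.logspace(2, 7, 6)

# Hypothetical models: classical saturates early at a low ceiling,
# large neural networks need much more data but reach a higher ceiling.
classical = performance(data, ceiling=0.70, half_saturation=1e3)
small_nn  = performance(data, ceiling=0.85, half_saturation=1e4)
large_nn  = performance(data, ceiling=0.98, half_saturation=1e5)

for n, c, s, l in zip(data, classical, small_nn, large_nn):
    print(f"{n:>10.0f} samples: classical={c:.2f}, "
          f"small NN={s:.2f}, large NN={l:.2f}")
```

With these assumed parameters, the classical model wins at the smallest dataset sizes, while the large network overtakes both others once millions of examples are available, reproducing the qualitative crossover the graph describes.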
3. This also answers the question of why Deep Learning has improved so much:
a. More data. Only in recent years have we had an overwhelming amount of data available for analysis.
b. Another reason is hardware development. Moore’s law still holds in the sense that hardware performance roughly doubles every one and a half years. This may no longer apply to transistor counts on a chip, but for Deep Learning, GPUs – graphics processing units – are more relevant.
c. Another reason is better algorithms. Neural networks were developed many decades ago, but they gained much popularity only after Google opened up the development of TensorFlow and made it open source. Since then, many developers have adopted it, and we have seen many breakthroughs.
d. The last reason I want to mention is Open Source. The deep learning frameworks are all open source, as are the programming languages, such as R and Python, with which you can create deep learning models.