Neural Networks are built from many interconnected layers, each with its own Activation Function. But why are these necessary? What are Activation Functions actually for?
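To see why, consider what happens without one: stacking linear layers with no activation in between collapses into a single linear map, so the network can never model a non-linear function. A minimal sketch in plain Python (toy weights chosen for illustration, not from the article):

```python
def linear(W, x):
    """Matrix-vector product for plain Python lists."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(x):
    """ReLU activation: zero out negative components."""
    return [max(v, 0.0) for v in x]

# Two tiny layers and a single input vector.
W1 = [[1.0, -1.0],
      [2.0,  0.0]]
W2 = [[1.0, 1.0]]
x = [1.0, 2.0]

# Without an activation, the two layers act as one linear map.
no_activation = linear(W2, linear(W1, x))    # [1.0]
# With a ReLU between them, the composition is no longer linear.
with_relu = linear(W2, relu(linear(W1, x)))  # [2.0]
print(no_activation, with_relu)
```

The two outputs differ precisely because the ReLU discarded the negative intermediate value, something no purely linear stack can do.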
Convolutional Neural Networks are part of what made Deep Learning reach the headlines so often in the last decade. Today we’ll train a CNN to tell us whether an image contains a dog or a cat, using TensorFlow’s eager API.
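Before reaching for TensorFlow, it helps to see the core operation a CNN layer performs. Below is a minimal 2-D convolution (technically cross-correlation, as deep learning libraries implement it) in NumPy, with an illustrative edge-detector kernel that is my own toy example, not the article’s model:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image, taking a dot product at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # output height (no padding, stride 1)
    ow = image.shape[1] - kw + 1   # output width
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: responds where pixel values change left to right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d(image, kernel))
```

The output peaks exactly over the 0-to-1 boundary in the image; a CNN learns kernels like this one from data instead of having them hand-designed.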
Deep Learning has revolutionized the Machine Learning scene in recent years. Can we apply it to image compression? How well can a Deep Learning algorithm reconstruct pictures of kittens? What’s an autoencoder?
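An autoencoder compresses its input through a narrow bottleneck and then tries to reconstruct it. As a hedged sketch of the idea (a linear autoencoder on synthetic data, far simpler than anything applied to real kitten pictures), here it is in NumPy, trained by gradient descent on the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-D points that actually lie on a 2-D subspace,
# so a 2-unit bottleneck can reconstruct them well.
Z = rng.standard_normal((200, 2))
basis = rng.standard_normal((2, 4))
X = Z @ basis                          # shape (200, 4)

# Encoder (4 -> 2) and decoder (2 -> 4) weight matrices.
W_enc = rng.standard_normal((4, 2)) * 0.1
W_dec = rng.standard_normal((2, 4)) * 0.1

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error after encoding and decoding."""
    X_hat = (X @ W_enc) @ W_dec
    return np.mean((X_hat - X) ** 2)

lr = 0.02
initial = loss(X, W_enc, W_dec)
for _ in range(1000):
    H = X @ W_enc                      # bottleneck codes, shape (200, 2)
    E = H @ W_dec - X                  # reconstruction error
    # Gradients of the mean squared error w.r.t. each weight matrix.
    g_dec = H.T @ E * (2 / len(X))
    g_enc = X.T @ (E @ W_dec.T) * (2 / len(X))
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
final = loss(X, W_enc, W_dec)
print(initial, final)                  # reconstruction error should drop
```

Real image autoencoders replace these linear maps with (convolutional) layers and non-linear activations, but the training loop is the same shape: encode, decode, minimize reconstruction error.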