Deep Learning with TensorFlow
Set up your computing environment and install TensorFlow. By the end of this course, you will be able to build simple TensorFlow graphs for everyday computations, apply logistic regression for classification with TensorFlow, design and train a multilayer neural network, and much more.
- Self-paced with Lifetime Access
- Certificate on Completion
- Access on Android and iOS App
Channel the power of deep learning with Google's TensorFlow!
With deep learning going mainstream for making sense of data, getting accurate results with deep networks is now within reach. This video is your guide to exploring the possibilities of deep learning. It will enable you to understand data like never before. With the efficiency and simplicity of TensorFlow, you will be able to process your data and gain insights that will change how you look at data.
With this video, you will dig deeper into the hidden layers of abstraction built from raw data. The video covers complex deep learning algorithms along with examples that apply these deep neural networks. You will also learn how to train your machine to craft new features that make sense of deeper layers of data. During the course, you will come across topics such as logistic regression, convolutional neural networks, and training deep networks. With the help of practical examples, the video will cover advanced multilayer networks, image recognition, and beyond.
This course uses TensorFlow 0.8 and Python 3.5. While these are not the latest versions available, the content remains relevant and informative for legacy users of TensorFlow and Python.
About The Author
- Dan Van Boxel is a Data Scientist and Machine Learning Engineer with over 10 years of experience. He is best known for "Dan Does Data," a YouTube livestream demonstrating the power and pitfalls of neural networks. He has developed and applied novel statistical models of machine learning to topics such as accounting for truck traffic on highways, travel time outlier detection, and other areas. Dan has also published research and presented findings at the Transportation Research Board and in other academic venues.
- Some familiarity with C++ or Python is assumed
- Set up your computing environment and install TensorFlow
- Build simple TensorFlow graphs for everyday computations
- Apply logistic regression for classification with TensorFlow
- Design and train a multilayer neural network with TensorFlow
- Understand intuitively convolutional neural networks for image recognition
- Bootstrap a neural network from simple to more accurate models
- See how to use TensorFlow with other types of networks
- Program networks with Scikit Flow (skflow), a high-level interface to TensorFlow
TensorFlow is a new machine learning library that is probably not installed on your operating system by default. This video will guide you through installing TensorFlow locally or remotely.
- Note that TensorFlow is a new machine learning library with extensive documentation
- Install TensorFlow locally via pip
- Discuss how SageMathCloud supports TensorFlow
Before we can use TensorFlow for deep learning, we need to understand how TensorFlow handles basic objects and operations. This video will walk you through a few computations.
- Understand TensorFlow objects as multidimensional “tensors”
- Describe TensorFlow computation as specifying a graph
- Execute all or part of a graph using a “session”
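The define-then-run idea behind graphs and sessions can be illustrated without TensorFlow itself. The sketch below is a toy pure-Python analogy, not the real TensorFlow API: operations are described first as a graph of deferred nodes, and nothing is computed until a "run" call evaluates the part of the graph you ask for.

```python
# Toy illustration of TensorFlow's define-then-run model (NOT the real API):
# first describe a graph of deferred operations, then evaluate it on demand.

class Node:
    def __init__(self, fn, inputs=()):
        self.fn = fn          # the operation this node performs
        self.inputs = inputs  # upstream nodes it depends on

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def multiply(a, b):
    return Node(lambda x, y: x * y, (a, b))

def run(node):
    """Evaluate only the part of the graph this node depends on."""
    args = [run(inp) for inp in node.inputs]
    return node.fn(*args)

a = constant(3)
b = constant(4)
c = add(a, b)               # nothing computed yet; we only described the graph
d = multiply(c, constant(2))
print(run(d))               # evaluation happens at "session" time -> 14
```

In real TensorFlow, the session plays the role of this run function, executing all or part of the graph you built.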
Learning any library from documentation can be challenging, so we're going to build a practical machine learning classifier with TensorFlow. We'll start with a simple logistic regression classifier and build up from there.
- We will classify images of characters by font
- Logistic regression is a simple way to “score” possible fonts
- We will quickly code the entire classifier in TensorFlow
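The scoring idea can be previewed in plain NumPy. This is a hypothetical mini-example with random weights, not the course's TensorFlow code: logistic (softmax) regression computes a score per font class from pixel features, then turns the scores into probabilities.

```python
import numpy as np

# Illustrative sketch: score 2 hypothetical font classes from 4 pixel features.
rng = np.random.default_rng(0)
x = rng.random(4)                 # flattened pixel intensities of one image
W = rng.standard_normal((2, 4))   # one weight row per font class
b = np.zeros(2)

logits = W @ x + b                          # raw per-class scores
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
print(probs)                                # the probabilities sum to 1
```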
Though we have a classifier, we need to compute weights so that our model is accurate. For this, we can use TensorFlow to specify and optimize a loss function. TensorFlow will then use this to find good weights.
- Build a TensorFlow “categorical cross-entropy” function to optimize weights
- Train the model with gradient descent over many epochs
- Evaluate model accuracy and weights
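The optimization loop can be sketched in NumPy on synthetic data (the course uses font images and lets TensorFlow compute gradients automatically): categorical cross-entropy penalizes low probability on the true class, and a gradient-descent step reduces it.

```python
import numpy as np

# Sketch of categorical cross-entropy plus one gradient-descent step
# on softmax weights. Data is synthetic and purely illustrative.

def softmax(z):
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, label):
    return -np.log(probs[label])  # -log probability of the true class

x = np.array([0.2, 0.8, 0.5])     # one example's features
y = 1                             # true class index
W = np.zeros((2, 3))              # 2 classes, 3 features
lr = 0.1

loss_before = cross_entropy(softmax(W @ x), y)

# Softmax cross-entropy gradient w.r.t. W: (probs - one_hot(y)) outer x
grad = np.outer(softmax(W @ x) - np.eye(2)[y], x)
W -= lr * grad                    # one gradient-descent update

loss_after = cross_entropy(softmax(W @ x), y)
print(loss_before, loss_after)    # the loss decreases after the step
```

Training repeats this update over many epochs until the weights converge.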
Using single pixels as features limits us to model essentially linear phenomena. To model non-linear things such as font styles involving several pixels, we will use neural networks to transform our inputs into non-linear combinations for use in a logistic regression classifier.
- Explain simple non-linear transformations of values with activation functions
- Implement a single neuron in TensorFlow
- Describe the structural nature of a neural network
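A single neuron is small enough to write out directly. The values below are illustrative placeholders: the neuron takes a weighted sum of its inputs and passes it through a non-linear activation function.

```python
import numpy as np

# A single artificial neuron: weighted sum of inputs, then a non-linearity.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # inputs (e.g. pixel values)
w = np.array([0.4, 0.3, -0.2])   # weights (would be learned in practice)
b = 0.1                          # bias term

activation = sigmoid(w @ x + b)  # non-linear output squashed into (0, 1)
print(activation)
```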
Now that we understand neural networks, we can see how TensorFlow makes them easy to implement and train.
- Implement the neural net in TensorFlow
- Briefly describe how backpropagation trains the weights
- Use TensorFlow to train the neural net
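The forward pass of such a network is only a few lines of NumPy. The shapes and random weights below are illustrative (4 pixels, 3 hidden units, 2 classes); TensorFlow handles the backpropagation half automatically.

```python
import numpy as np

# Forward pass of a one-hidden-layer network: the hidden layer transforms
# inputs non-linearly, then a softmax output layer classifies.

rng = np.random.default_rng(1)
x = rng.random(4)                          # one input example

W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)   # input -> hidden
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)   # hidden -> output

hidden = np.tanh(W1 @ x + b1)              # non-linear hidden activations
logits = W2 @ hidden + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax class probabilities
print(probs)
```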
Now that we've trained a neural network, we should inspect it closely to understand the accuracy and weights.
- Let's examine the test accuracy over many epochs of training
- Create a confusion matrix to understand model errors
- Inspect and visualize the learned weights
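A confusion matrix is simple to build by hand. The labels below are made up for illustration: rows are true classes, columns are predictions, so off-diagonal entries reveal which fonts the model confuses.

```python
import numpy as np

# Build a confusion matrix: rows = true class, columns = predicted class.
y_true = np.array([0, 0, 1, 1, 2, 2])   # illustrative labels
y_pred = np.array([0, 1, 1, 1, 2, 0])   # illustrative predictions
n_classes = 3

conf = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    conf[t, p] += 1

print(conf)
accuracy = np.trace(conf) / conf.sum()  # diagonal entries are correct
print(accuracy)
```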
A single hidden layer is good, but you may find that the number of neurons required to model very complex features grows prohibitively. To combine features more easily, we expand the network in depth rather than width: true deep learning with multiple hidden layers.
- Instead of adding more neurons, let's add a new layer.
- Deciding on the number of neurons and layers requires experimentation!
- Train the deep neural network. This may take some time.
With our deep neural net trained, we should take time to check its accuracy and understand the features it's extracting.
- Verify model accuracy on training and testing data
- Visualize and understand the weights of the input layer
- Visualize and understand the weights of the output layer
Particularly in images, the features that we want to find can occur anywhere among the pixels. Convolutional neural nets allow us to train one set of weights to search small windows of an image for a feature.
- Understand convolution as sliding windows with one set of weights
- Multiple sets of weights allow extracting many features from one window
- Handle multiple pixel channels (like RGB colors)
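The sliding-window idea can be shown concretely in NumPy. This tiny example is illustrative, not the course code: one 2x2 set of weights is reused at every position of a 3x3 "image", producing a feature map.

```python
import numpy as np

# Convolution as a sliding window: ONE shared 2x2 weight set is applied
# at every position of the image, so the same feature is found anywhere.

image = np.array([[1., 0., 1.],
                  [0., 1., 0.],
                  [1., 0., 1.]])
weights = np.array([[1., 0.],
                    [0., 1.]])   # a single shared feature detector

h = image.shape[0] - weights.shape[0] + 1
w = image.shape[1] - weights.shape[1] + 1
feature_map = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        window = image[i:i + 2, j:j + 2]
        feature_map[i, j] = (window * weights).sum()  # same weights everywhere

print(feature_map)  # high values mark where the pattern occurs
```

With multiple sets of weights, each produces its own feature map; colored images simply add a channel dimension to the window.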
Understanding the theory of convolutional layers is useless without learning the tools to actually use them. This video walks through a simple example with TensorFlow.
- Describe an example setup and unusual shapes of variables
- Implement weights and call TensorFlow convolution code
- Evaluate sample node and observe the effect of convolution
Convolutions can find a feature anywhere in an image, but with all the overlap, we need to make sure we don't find the same feature in the same place multiple times. A pooling layer reduces the size of our input, taking only relevant information.
- Max pooling is a sliding window without overlap
- Pooling usually doesn't consider partial windows
- Combine convolutional and pooling layers for maximum effect
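The pooling operation itself is a one-liner in NumPy. The feature map below is made up for illustration: each non-overlapping 2x2 window is reduced to its maximum, halving each spatial dimension.

```python
import numpy as np

# 2x2 max pooling with no overlap: each disjoint window keeps only its
# maximum, so a 4x4 feature map shrinks to 2x2.

feature_map = np.array([[1., 3., 2., 0.],
                        [4., 2., 1., 1.],
                        [0., 1., 5., 6.],
                        [2., 3., 7., 1.]])

# Reshape into 2x2 blocks, then take the max within each block.
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # shape (2, 2): half of (4, 4) in each dimension
```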
Having learned how Max Pooling works in theory, it's time to put it into practice by adding it to our simple example in TensorFlow.
- tf.nn.max_pool is the core code call
- For a final pooling layer, we should flatten the output.
- Confirm that 2x2 max pooling has cut each dimension in half
We've learned about convolutional layers and used them in an example, but now let's use them for real by adding a convolutional layer to the font classification model.
- Setup window size and weights with correct shape
- Implement convolution and pooling layers. Flatten the parameters.
- Evaluate the model accuracy
Convolutional layers often work well when chained together. Let's add another to our font classification model.
- Set up a new convolutional layer
- Carefully transition from convolutional to dense layer
- Describe dropout training to avoid overfitting
After our deep model has trained for a while, it's time to see how well it performs.
- Note that there are no dropouts during testing
- Observe the model accuracy
- Understand convolutional weights as small image features
Some problems have time-based inputs. Features from the recent past might matter to the current prediction. To address these, researchers have developed recurrent neural networks. TensorFlow natively supports these.
- Explain the background of recurrent neural networks
- Describe the example problem of predicting the season from the weather
- Implement a season predictor in TensorFlow with a recurrent neural network
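The core of a recurrent cell fits in a short NumPy sketch. The weights here are random placeholders, not trained values, and the real course example uses TensorFlow's RNN support: a hidden state carries information from past time steps (recent weather) into the current prediction.

```python
import numpy as np

# Minimal recurrent-cell sketch: the hidden state h is updated at every
# time step with the SAME weights, letting the past influence the present.

rng = np.random.default_rng(2)
Wx = rng.standard_normal((3, 2))   # input -> hidden weights
Wh = rng.standard_normal((3, 3))   # hidden -> hidden weights (the recurrence)
b = np.zeros(3)

inputs = rng.random((5, 2))        # 5 time steps of 2 weather features each
h = np.zeros(3)                    # initial hidden state

for x_t in inputs:
    h = np.tanh(Wx @ x_t + Wh @ h + b)   # same weights reused at every step

print(h)   # final state summarizes the whole sequence, ready for a classifier
```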
TensorFlow models can be cumbersome to specify, yet they follow a common pattern. skflow provides a simple interface for typical models.
- Introduce skflow as sklearn for TensorFlow
- Build a simple font classification model with skflow
- Build a custom font classification model with skflow
RNNs can be hard to specify, but skflow will let us quickly build a model.
- Reshape data for skflow RNN
- One-line model specification
- Train and evaluate
After learning so many methods and building all these models, it's helpful to look back and see how far we've come.
- Revisit font image data
- Review simple and densely connected models
- Review convolutional models
TensorFlow is changing very quickly and is being adopted by more researchers and professionals. But at its core are the contributions submitted by new users.
- Note the fast pace of development in TensorFlow
- Describe the generality of TensorFlow
- Encourage viewers to contribute to and improve TensorFlow