Deep Learning: Recurrent Neural Networks in Python
GRU, LSTM, + more modern deep learning, machine learning, and data science for sequences
What you’ll learn
- Understand the simple recurrent unit (Elman unit)
- Understand the GRU (gated recurrent unit)
- Understand the LSTM (long short-term memory unit)
- Write various recurrent networks in Theano
- Understand backpropagation through time
- Understand how to mitigate the vanishing gradient problem
- Solve the XOR and parity problems using a recurrent neural network
- Use recurrent neural networks for language modeling
- Use RNNs for generating text, like poetry
- Visualize word embeddings and look for patterns in word vector representations
Requirements
- Calculus
- Linear algebra
- Python, Numpy, Matplotlib
- Write a neural network in Theano
- Understand backpropagation
- Probability (conditional and joint distributions)
- Write a neural network in Tensorflow
Description
Like the course I just released on Hidden Markov Models, Recurrent Neural Networks are all about learning sequences. But whereas Markov Models are limited by the Markov assumption, Recurrent Neural Networks are not. As a result, they are more expressive and more powerful, and they have driven progress on tasks that had been stuck for decades.
So what’s going to be in this course and how will it build on the previous neural network courses and Hidden Markov Models?
In the first section of the course we are going to add the concept of time to our neural networks.
I’ll introduce you to the Simple Recurrent Unit, also known as the Elman unit.
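The core idea is compact: the hidden state at time t is h(t) = f(Wx x(t) + Wh h(t-1) + bh), so the unit feeds its own previous output back in as input. Here is a minimal Numpy sketch of that recurrence (all names and sizes are illustrative, not the course's actual code):

    import numpy as np

    def elman_step(x_t, h_prev, Wx, Wh, bh):
        # new hidden state = nonlinearity(current input + recurrent feedback)
        return np.tanh(x_t.dot(Wx) + h_prev.dot(Wh) + bh)

    D, M = 4, 5                              # input size, hidden size (illustrative)
    Wx = np.random.randn(D, M) / np.sqrt(D)  # input-to-hidden weights
    Wh = np.random.randn(M, M) / np.sqrt(M)  # hidden-to-hidden weights
    bh = np.zeros(M)

    h = np.zeros(M)                          # initial hidden state
    for x_t in np.random.randn(10, D):       # a length-10 input sequence
        h = elman_step(x_t, h, Wx, Wh, bh)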
We are going to revisit the XOR problem, but we’re going to extend it so that it becomes the parity problem. You’ll see that regular feedforward neural networks have trouble solving this problem, but recurrent networks will work, because the key is to treat the input as a sequence.
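To make the sequence view concrete, here is a hedged sketch of how parity training data could be generated; the target at each step is just the running XOR of the bits seen so far (an illustration, not the course's dataset code):

    import numpy as np

    # At each time step the target is the running parity (XOR) of all bits
    # so far; an RNN can carry this in its hidden state, while a plain
    # feedforward net must untangle all 2^T input patterns at once.
    N, T = 100, 12                          # number of sequences, sequence length
    X = np.random.randint(0, 2, size=(N, T))
    Y = np.cumsum(X, axis=1) % 2            # running-parity targets
    print(X[0])                             # e.g. [1 0 1 1 ...]
    print(Y[0])                             # ->   [1 1 0 1 ...]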
In the next section of the course, we are going to revisit one of the most popular applications of recurrent neural networks – language modeling.
You saw when we studied Markov Models that we could do things like generate poetry, and it didn’t look too bad. We could even discriminate between two different poets just from the sequence of parts-of-speech tags they used.
In this course, we are going to extend our language model so that it no longer makes the Markov assumption.
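As a rough sketch of what that means in code: the hidden state summarizes the entire history of the sentence, so the next-word distribution conditions on all previous words, not just the last N. All names and sizes below are hypothetical:

    import numpy as np

    def softmax(a):
        e = np.exp(a - a.max())
        return e / e.sum()

    V, D, M = 5000, 50, 100             # vocab, embedding, hidden sizes (illustrative)
    We = np.random.randn(V, D) * 0.01   # word embedding matrix
    Wx = np.random.randn(D, M) * 0.01   # embedding-to-hidden weights
    Wh = np.random.randn(M, M) * 0.01   # hidden-to-hidden weights
    Wo = np.random.randn(M, V) * 0.01   # hidden-to-vocab output weights

    h = np.zeros(M)
    for w in [42, 7, 1999]:             # a sentence as word indices
        h = np.tanh(We[w].dot(Wx) + h.dot(Wh))
    p_next = softmax(h.dot(Wo))         # p(next word | ALL previous words)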
Another popular application of neural networks for language is word vectors or word embeddings. The most common technique for this is called Word2Vec, but I’ll show you how recurrent neural networks can also be used for creating word vectors.
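For intuition: once a model like this is trained, each row of its embedding matrix is a word vector, and a common sanity check is cosine similarity between words. A small sketch, assuming a trained embedding matrix We and a word2idx lookup (both hypothetical names):

    import numpy as np

    def cosine_similarity(We, word2idx, w1, w2):
        # We (trained embedding matrix) and word2idx (word-to-row mapping)
        # are assumed to come from a model you have already trained
        v1, v2 = We[word2idx[w1]], We[word2idx[w2]]
        return v1.dot(v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

    # e.g. cosine_similarity(We, word2idx, "king", "queen") should score high
    # for good embeddings, while unrelated word pairs score near zero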
In the section after, we’ll look at the very popular LSTM, or long short-term memory unit, and the more modern and efficient GRU, or gated recurrent unit, which has been proven to yield comparable performance.
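As a preview, here is one common formulation of a single GRU step in Numpy, where z is the update gate and r is the reset gate (sign conventions for the update gate vary across papers, so treat this as a sketch rather than the course's exact code):

    import numpy as np

    def sigmoid(a):
        return 1 / (1 + np.exp(-a))

    def gru_step(x_t, h_prev, p):
        z = sigmoid(x_t.dot(p['Wxz']) + h_prev.dot(p['Whz']) + p['bz'])  # update gate
        r = sigmoid(x_t.dot(p['Wxr']) + h_prev.dot(p['Whr']) + p['br'])  # reset gate
        h_hat = np.tanh(x_t.dot(p['Wxh']) + (r * h_prev).dot(p['Whh']) + p['bh'])
        return (1 - z) * h_prev + z * h_hat  # blend old state with candidate state

    D, M = 4, 5  # illustrative sizes
    p = {name: np.random.randn(D, M) * 0.1 for name in ['Wxz', 'Wxr', 'Wxh']}
    p.update({name: np.random.randn(M, M) * 0.1 for name in ['Whz', 'Whr', 'Whh']})
    p.update({name: np.zeros(M) for name in ['bz', 'br', 'bh']})
    h = gru_step(np.random.randn(D), np.zeros(M), p)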
We’ll apply these to some more practical problems, such as learning a language model from Wikipedia data and visualizing the word embeddings we get as a result.
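One plausible way to do that visualization (not necessarily the course's exact approach) is to project the embedding matrix down to 2-D with t-SNE. This sketch assumes scikit-learn is available and uses random numbers as a stand-in for trained embeddings:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE   # scikit-learn is assumed installed

    We = np.random.randn(500, 50)       # stand-in for a trained embedding matrix
    Z = TSNE(n_components=2).fit_transform(We)
    plt.scatter(Z[:, 0], Z[:, 1], s=5)
    plt.title("Word embeddings, t-SNE projection")
    plt.show()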
All of the materials required for this course can be downloaded and installed for FREE. We will do most of our work in Numpy, Matplotlib, and Theano. I am always available to answer your questions and help you along your data science journey.
This course focuses on “how to build and understand”, not just “how to use”. Anyone can learn to use an API in 15 minutes after reading some documentation. It’s not about “remembering facts”, it’s about “seeing for yourself” via experimentation. It will teach you how to visualize what’s happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.
See you in class!
NOTES:
All the code for this course can be downloaded from my github: /lazyprogrammer/machine_learning_examples
In the directory: rnn_class
Make sure you always “git pull” so you have the latest version!
HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:
- Calculus
- Linear algebra
- Probability (conditional and joint distributions)
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file
- Deep learning: backpropagation, XOR problem
- Can write a neural network in Theano and Tensorflow
TIPS (for getting through the course):
- Watch it at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don’t, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Realize that most exercises will take you days or weeks to complete.
- Write code yourself, don’t just sit there and look at my code.
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?:
- Check out the lecture “What order should I take your courses in?” (available in the Appendix of any of my courses, including the free Numpy course)
- If you want to level up with deep learning, take this course.
- If you are a student or professional who wants to apply deep learning to time series or sequence data, take this course.
- If you want to learn about word embeddings and language modeling, take this course.
- If you want to improve the performance you got with Hidden Markov Models, take this course.
- If you’re interested in the techniques that led to new developments in machine translation, take this course.
- If you have no idea about deep learning, don’t take this course; take the prerequisites.
Created By: Lazy Programmer Inc.
Last Updated: 10/2018
Language: English
Size: 1.36 GB
https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/