Make More TensorFlow Version (part 1)
I recently started watching Andrej Karpathy’s makemore YouTube video series (I know I’m a year late to the party). In this and the following posts, I plan to reproduce his experiments and models using TensorFlow.
Overview of the problem
Andrej works with a names dataset that he downloaded from some website. The task is to train a model that generates names that sound like real names (the ones in the dataset) but are not actually in it. In the first lecture, he builds a purely statistical model. Each name is a sequence of characters. If we learn the distribution of which character is likely to follow which character (a bigram model), we can simply sample from that distribution to get the next character given the current one. That’s easy enough, but how do we know when to stop? That’s where the special start and end characters come into play. Take the first name in the dataset, emma. Let’s add a start and end character to it: `.emma.`. Whenever we sample the special `.` character, we know the name has ended. Here is some pseudo code on how to generate the names (given we have the distribution):
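(A rough sketch; `sample_next_char` is a placeholder for drawing a character from the learned bigram distribution, not a real function.)

```python
def generate_name(distribution):
    name = []
    ch = '.'                              # start with the special token
    while True:
        # pick a character according to the probabilities stored for ch
        ch = sample_next_char(distribution, ch)
        if ch == '.':                     # sampled the special token again -> stop
            break
        name.append(ch)
    return ''.join(name)
```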
Creating a vocabulary
This part is pretty easy. Our dataset only contains lowercase English characters (but this can easily be extended to non-English characters as well). Why do we map characters to numbers? That will become clear in the next step. Assuming our dataset is in a file called `names.txt`, here is how we create the vocabulary:
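Roughly along these lines (the `stoi`/`itos` names follow Andrej's lecture; the exact code in my notebook may differ slightly):

```python
# Read the names and build character <-> integer mappings.
words = open('names.txt', 'r').read().splitlines()

chars = sorted(set(''.join(words)))       # the 26 lowercase letters
stoi = {ch: i + 1 for i, ch in enumerate(chars)}
stoi['.'] = 0                             # special start/end token gets index 0
itos = {i: ch for ch, i in stoi.items()}

vocab_size = len(stoi)                    # 27
```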
How to learn the distribution?
Typically this is done by counting how many times each character appears after another character. Think of this as a 2D matrix where each row corresponds to a prev_token and each column corresponds to the next_token. Our matrix will be 27 x 27 (26 letters plus 1 extra row and column for the special `.` token). To index into this matrix, it is better to use integers (sure, we could have used dicts of dicts, but it is not as efficient, even for a dataset as small as this one).
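The counting step could look something like this (a sketch using a NumPy array for the counts; it builds on the `words`, `stoi`, and `vocab_size` defined above):

```python
import numpy as np

# 27 x 27 matrix of bigram counts: rows index the previous character,
# columns index the next character.
N = np.zeros((vocab_size, vocab_size), dtype=np.int64)

for w in words:
    chs = ['.'] + list(w) + ['.']         # wrap each name with the start/end token
    for ch1, ch2 in zip(chs, chs[1:]):
        N[stoi[ch1], stoi[ch2]] += 1
```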
Visualizing the counts
Andrej had a cool way to visualize this distribution, which I’m going to directly copy-paste:
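His plotting snippet looks roughly like this (adapted here to the NumPy count matrix `N` from above; watch the lecture for the original):

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(16, 16))
plt.imshow(N, cmap='Blues')
for i in range(vocab_size):
    for j in range(vocab_size):
        bigram = itos[i] + itos[j]
        # bigram label on top, raw count below it, in each cell
        plt.text(j, i, bigram, ha='center', va='bottom', color='gray')
        plt.text(j, i, str(N[i, j]), ha='center', va='top', color='gray')
plt.axis('off')
plt.show()
```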
Normalizing the counts
To sample from the distribution, it is better to have probabilities instead of counts. See my earlier post on random pick weight for a more detailed explanation. Here I will use some TensorFlow functions to do the same thing.
First, we convert the counts into a tensor of floating point numbers. Next, we normalize each row to sum up to 1. This could be done by taking each value in a row and dividing it by the sum of all the columns in that row, but TensorFlow can do it with a single call to `tf.math.reduce_sum`. Watch Andrej’s lecture to see why we need `keepdims=True` (he does a great job explaining that).
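In code, something along these lines (assuming `N` is the count matrix built earlier):

```python
import tensorflow as tf

# Convert counts to floats, then normalize each row into a probability
# distribution over the next character.
P = tf.constant(N, dtype=tf.float32)
row_sums = tf.math.reduce_sum(P, axis=1, keepdims=True)   # shape (27, 1)
P = P / row_sums                                           # broadcasts across columns
```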
Writing the ‘model’ (LOL)
How do we convert this matrix into a model? It already is the model. We just need to wrap it in a form we can call directly, which can be done with Python’s `__call__` method.
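A minimal sketch of such a wrapper (the class and argument names are my own choices; the actual code in my notebook may differ):

```python
import tensorflow as tf

class BigramModel:
    """Wraps the row-normalized probability matrix so instances are callable."""

    def __init__(self, probs, stoi, itos):
        self.probs = probs   # (27, 27) tensor, each row sums to 1
        self.stoi = stoi
        self.itos = itos

    def __call__(self, prev_idx):
        # Sample the index of the next character given the previous one.
        # tf.random.categorical expects log-probabilities with a batch dimension.
        logits = tf.math.log(self.probs[prev_idx])[tf.newaxis, :]
        next_idx = tf.random.categorical(logits, num_samples=1)
        return int(next_idx[0, 0])
```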
Putting it all together
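Wiring the vocabulary, the normalized matrix, and the callable model together might look something like this (the seed value and the `generate_name` helper are illustrative choices, not necessarily what I used for the results below):

```python
tf.random.set_seed(1234)   # illustrative seed

model = BigramModel(P, stoi, itos)

def generate_name(model):
    out = []
    idx = model.stoi['.']            # start at the special token
    while True:
        idx = model(idx)
        if idx == model.stoi['.']:   # sampled the end token -> stop
            break
        out.append(model.itos[idx])
    return ''.join(out)

for _ in range(5):
    print(generate_name(model))
```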
I set the random seed so you can try reproducing the results I am getting on your machine. Here are the 5 names I got (even more terrible than the PyTorch version that Andrej trained).
I haven’t figured out a way to set the same random seed in both PyTorch and TensorFlow to compare the two results. But if you replace the TensorFlow and PyTorch sampling code with a NumPy version in both, you get similar results, so there is nothing wrong with the code.
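For reference, a NumPy-based sampling step could look like this (a sketch; the probability row comes from the same normalized matrix):

```python
import numpy as np

rng = np.random.default_rng(1234)   # example seed, not necessarily the one I used

def sample_next(probs_row):
    # probs_row is a length-27 array of probabilities for the next character
    return rng.choice(len(probs_row), p=probs_row)
```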
Next steps
Feel free to play around with the code on different datasets (perhaps some names with non-English characters as well). In the next post, we will implement the same terrible model using a neural network.