An Example of Creating a CNN with TensorFlow
-- Introduction of CNN
A convolutional neural network (CNN) is a type of neural network in machine learning that specializes in image recognition. It can find features regardless of where they appear in the input, such as a traffic light anywhere in a picture or a phrase anywhere in a sentence.
CNNs are actually inspired by the biology of the visual cortex (thank God!), in which local receptive fields are groups of neurons that respond only to a small part of what you see. The overlap of these fields covers the entire visual field, an effect similar to convolution.
A typical CNN looks like the picture at the top: it is composed of a few convolutional layers that detect certain features and some pooling layers that reduce the amount of computation in the network. At the end, we use the softmax function to assign a prediction probability to each candidate class.
For the detailed calculation in a convolutional layer, here is an example using the filter matrix
[ [ 0 0 1],
[ 1 0 0],
[ 1 1 0] ]
Note that different filter matrices detect different features of a given picture (data).
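To make the arithmetic concrete, here is a minimal sketch in plain NumPy of sliding this filter over a hypothetical 5*5 black-and-white image (the input image is made up for illustration):

```python
import numpy as np

# The 3x3 filter matrix from the text
kernel = np.array([[0, 0, 1],
                   [1, 0, 0],
                   [1, 1, 0]])

# A hypothetical 5x5 black-and-white input image
image = np.array([[0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0],
                  [1, 1, 0, 0, 0],
                  [1, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0]])

def convolve2d(img, k):
    """Slide the filter over the image (no padding, stride 1) and
    sum the element-wise products at each position."""
    kh, kw = k.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=img.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = convolve2d(image, kernel)
print(feature_map)  # a 3x3 feature map
```

Positions where the image patch lines up with the filter pattern produce large values in the feature map, which is how a filter "detects" its feature.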
-- Create CNN for handwriting recognition
We will use the MNIST dataset to train and test our CNN. The MNIST database is a large collection of handwritten digits that is commonly used for training image processing systems. This dataset is actually included in TensorFlow, so first let's import all the libraries we need for this task.
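The original post's exact import list is not shown, so here is one reasonable setup, using the copy of MNIST that ships with Keras:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist

# Load the MNIST digits (Keras downloads them on first use)
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Scale pixel values from [0, 255] down to [0, 1]
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
```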
Since the images were flattened into 1D arrays of 784 pixels, we want to reshape the data into 2D images of 28*28 pixels:
Note that we actually convert each image into a [28, 28, 1] array; the 1 represents the number of color channels. Since our data is black and white, we only have one channel; for a color picture it would be 3.
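A sketch of that reshape, shown here on a small synthetic array standing in for the real data (the real x_train would have 60,000 rows):

```python
import numpy as np

# Hypothetical stand-in for the flattened images: 100 rows of 784 pixels
x_train = np.random.rand(100, 784).astype("float32")

# Reshape each 784-pixel row into a 28x28 image with 1 grayscale channel
x_train = x_train.reshape(-1, 28, 28, 1)

# For RGB images the last dimension would be 3 instead of 1
print(x_train.shape)  # (100, 28, 28, 1)
```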
Then we will convert our labels into categorical one-hot format:
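In Keras this is usually done with `keras.utils.to_categorical`; the same transform can be sketched in plain NumPy (the label values below are made up):

```python
import numpy as np

# Example digit labels (hypothetical values)
y_train = np.array([5, 0, 4, 1, 9])

num_classes = 10
# Row k of the 10x10 identity matrix is the one-hot vector for class k
y_train = np.eye(num_classes)[y_train]

print(y_train[0])  # digit 5 -> [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```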
After preprocessing our data, we can then build our CNN model!
After adding two convolutional layers in the second and third steps, we reduce the computational complexity with a 2*2 max pooling layer and a dropout rate of 0.25.
Finally, we flatten the result into 1D and pass it into our final prediction layer, a dense layer of size 10 with a softmax activation function.
Then we compile our model with a loss function and an optimizer; note that Keras saves us a lot of time in building this CNN model.
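The post doesn't reproduce the exact layer sizes, so here is one plausible version of that model (the filter counts 32 and 64 and the choice of the "adam" optimizer are assumptions), built and compiled with Keras:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                # 28x28 grayscale images
    layers.Conv2D(32, (3, 3), activation="relu"),  # first convolutional layer
    layers.Conv2D(64, (3, 3), activation="relu"),  # second convolutional layer
    layers.MaxPooling2D(pool_size=(2, 2)),         # 2x2 max pooling
    layers.Dropout(0.25),                          # randomly drop 25% of units
    layers.Flatten(),                              # flatten feature maps to 1D
    layers.Dense(10, activation="softmax"),        # one probability per digit
])

# Categorical cross-entropy matches the one-hot labels
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
```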
We train our model for 10 epochs. More epochs generally help, but CNNs are computationally expensive to train.
Finally, we can see that our model reaches over 99% accuracy! That is great, but it did take a lot of time to train.
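A runnable sketch of the training and evaluation calls. To keep it fast, it uses a tiny synthetic batch, a smaller model, and 1 epoch; with the real MNIST arrays you would pass x_train, y_train with epochs=10 and evaluate on x_test, y_test (this toy setup will not reach 99% accuracy):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Tiny synthetic stand-in for MNIST so this sketch runs in seconds
x = np.random.rand(64, 28, 28, 1).astype("float32")
y = np.eye(10)[np.random.randint(0, 10, size=64)]

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# With the real data: model.fit(x_train, y_train, epochs=10, ...)
history = model.fit(x, y, batch_size=32, epochs=1, verbose=0)

# With the real data: model.evaluate(x_test, y_test)
loss, acc = model.evaluate(x, y, verbose=0)
print(f"accuracy: {acc:.3f}")
```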
Reference:
https://www.mathworks.com/solutions/deep-learning/convolutional-neural-network.html
Happy Learning :)
Ye Jiang