Learning deep learning (project 2, image classification)

1 min read

In this class project, I built a network to classify images in the CIFAR-10 dataset. This dataset is freely available.

The dataset contains 60K color images (32×32 pixels) in 10 classes, with 6K images per class.

Here are the classes in the dataset:

airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck
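
If you want to poke at the raw data yourself, the Python version of CIFAR-10 is distributed as pickled batch files. Below is a minimal sketch of loading one batch (illustrative only; the project notebook uses its own helper functions for this):

import pickle
import numpy as np

def load_cifar10_batch(path):
    """Load one batch file (e.g. 'data_batch_1') from the Python version
    of CIFAR-10 and return the images and labels."""
    with open(path, 'rb') as f:
        batch = pickle.load(f, encoding='latin1')
    # Each row of 'data' is 3072 bytes: 1024 red, 1024 green, 1024 blue values.
    images = batch['data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.array(batch['labels'])
    return images, labels

images, labels = load_cifar10_batch('cifar-10-batches-py/data_batch_1')
print(images.shape)  # (10000, 32, 32, 3)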

As you can imagine, it is not possible to write down explicit rules to classify these images, so we need to write a program that can learn.

The neural network I created contains two hidden layers. The first is a convolutional layer with max pooling, followed by dropout (70% of the connections dropped). The second is a fully connected layer with 384 neurons, also followed by dropout.

def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    model = conv2d_maxpool(x, conv_num_outputs=18, conv_ksize=(4,4), conv_strides=(1,1), pool_ksize=(8,8), pool_strides=(1,1))
    model = tf.nn.dropout(model, keep_prob)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    model = flatten(model)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    model = fully_conn(model, 384)

    model = tf.nn.dropout(model, keep_prob)

    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    model = output(model, 10)

    # TODO: return output
    return model
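
The helper functions conv2d_maxpool, flatten, fully_conn, and output are defined earlier in the notebook (see the link at the end of this post for the full code). As a rough sketch of what they look like in TensorFlow 1.x, assuming truncated-normal weight initialization with a standard deviation of 0.1 (the value discussed in the comments below); my actual implementations may differ in details:

import tensorflow as tf

def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides,
                   pool_ksize, pool_strides):
    """Convolution (ReLU activation) followed by max pooling."""
    depth = x_tensor.get_shape().as_list()[-1]
    weights = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], depth, conv_num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(conv_num_outputs))
    conv = tf.nn.conv2d(x_tensor, weights,
                        strides=[1, conv_strides[0], conv_strides[1], 1],
                        padding='SAME')
    conv = tf.nn.relu(tf.nn.bias_add(conv, bias))
    return tf.nn.max_pool(conv,
                          ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                          strides=[1, pool_strides[0], pool_strides[1], 1],
                          padding='SAME')

def flatten(x_tensor):
    """Flatten (batch, height, width, depth) into (batch, height*width*depth)."""
    shape = x_tensor.get_shape().as_list()
    return tf.reshape(x_tensor, [-1, shape[1] * shape[2] * shape[3]])

def fully_conn(x_tensor, num_outputs):
    """Fully connected layer with ReLU activation."""
    dim = x_tensor.get_shape().as_list()[-1]
    weights = tf.Variable(tf.truncated_normal([dim, num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))

def output(x_tensor, num_outputs):
    """Output layer producing logits (no activation)."""
    dim = x_tensor.get_shape().as_list()[-1]
    weights = tf.Variable(tf.truncated_normal([dim, num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.add(tf.matmul(x_tensor, weights), bias)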

Then I trained this network on an Amazon AWS g2.2xlarge instance. This instance has a GPU, which is much faster than a CPU for deep learning. I ran a simple experiment and found the GPU to be at least 3 times faster than the CPU:

If all layers run on the GPU: 14 seconds to run 4 epochs.
If the conv layer runs on the CPU and the rest on the GPU: 36 seconds to run 4 epochs.

This is admittedly a very crude comparison, but the GPU is clearly much faster than the CPU (at least the hardware in the AWS g2.2xlarge instance, which costs $0.65/hour).
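
For reference, TensorFlow 1.x lets you pin individual layers to a device with tf.device. Below is a sketch of the kind of placement I compared, reusing the helper functions sketched above (the device strings are standard TensorFlow names; this is not copied verbatim from my notebook):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
keep_prob = tf.placeholder(tf.float32)

# Variant: convolution/pooling on the CPU, everything else on the GPU.
with tf.device('/cpu:0'):
    model = conv2d_maxpool(x, conv_num_outputs=18, conv_ksize=(4, 4),
                           conv_strides=(1, 1), pool_ksize=(8, 8),
                           pool_strides=(1, 1))

with tf.device('/gpu:0'):
    model = tf.nn.dropout(model, keep_prob)
    model = flatten(model)
    model = fully_conn(model, 384)
    logits = output(model, 10)

# allow_soft_placement lets ops without a GPU kernel fall back to the CPU;
# log_device_placement prints where each op actually runs.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                        log_device_placement=True))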

Eventually I got ~70% accuracy on the test data, much better than random guessing (10%). Training the model took ~30 minutes.
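
For completeness, test accuracy is simply the fraction of correctly classified test images. In TensorFlow 1.x it is typically computed from the logits along these lines (y stands for the one-hot label placeholder; a sketch, not my exact notebook code):

y = tf.placeholder(tf.float32, [None, 10])

# Predicted class = argmax of the logits; true class = argmax of the one-hot label.
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))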

You can find my entire code at:
https://www.alivelearn.net/deeplearning/dlnd_image_classification_submission2.html



2 Replies to “Learning deep learning (project 2, image classification)”

  1. Helpful post. Can you explain your motivation behind using a standard deviation of 0.1 when initializing the weights? My network does not learn if I keep the standard deviation at 1. Only when I saw your post and fine-tuned my standard deviation to 0.1 did it start training. I would like to understand how you chose the standard deviation of 0.1 🙂

  2. Can you explain how you arrived at the values below?

    model = fully_conn(model,384)
    #model = fully_conn(model,200)
    #model = fully_conn(model,20)
