Deep learning training speed with 1080 Ti and M1200


I compared the training speed of an NVIDIA GTX 1080 Ti in a desktop (Intel Core i5-3470 CPU @ 3.2 GHz, 32 GB memory) and an NVIDIA Quadro M1200 (4 GB GDDR5, 640 CUDA cores) in a laptop (Intel Core i7-7920HQ CPU, quad core, 3.10 GHz base / 4.10 GHz turbo, 8 MB cache, 45 W; 64 GB memory).
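Before timing anything, it is worth confirming that Keras actually sees the GPU. Here is a minimal check, assuming the TensorFlow backend (the import path below is TensorFlow's own, not Keras'):

from tensorflow.python.client import device_lib

# List every device TensorFlow can see; GPUs appear with
# device_type == 'GPU' along with their name.
for d in device_lib.list_local_devices():
    print(d.device_type, d.name)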

The code I used is Keras' own example to classify the MNIST dataset:

[Image: sample digits from the MNIST dataset]
'''Trains a simple convnet on the MNIST dataset.

Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# reshape to (channels, rows, cols) or (rows, cols, channels)
# depending on the backend's image data format
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
          verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
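I saved the script as mnist_cnn.py (my own file name; the original lives in Keras' examples folder) and ran it on each machine. With the TensorFlow backend, a specific GPU can be pinned via the CUDA_VISIBLE_DEVICES environment variable if a machine has more than one:

CUDA_VISIBLE_DEVICES=0 python mnist_cnn.py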

The result is that the 1080 Ti is about 3 times faster than the M1200:

GPU       Time per epoch
M1200     18 s
1080 Ti   6 s
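The per-epoch times above are what Keras prints in its progress bar. To capture them programmatically instead, a small callback can record the wall-clock time of each epoch. This is just a sketch; TimingCallback is my own name, not part of Keras:

import time
import keras

class TimingCallback(keras.callbacks.Callback):
    # Record the wall-clock duration of every epoch.
    def on_train_begin(self, logs=None):
        self.epoch_times = []

    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        self.epoch_times.append(time.time() - self._start)

# Usage: create timing = TimingCallback(), pass callbacks=[timing]
# to model.fit(), then inspect timing.epoch_times afterwards.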
