Google Colab is very slow compared to my PC

I've recently started to use Google Colab, and I wanted to train my first convolutional NN. I imported the images from my Google Drive thanks to the answer I got here.

Then I pasted the code that creates the CNN into Colab and started the process. Here is the complete code:

Part 1: Setting up Colab to import pictures from my Drive

(Part 1 is copied from here, as it worked as expected for me.)

Step 1:

!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse

Step 2:

from google.colab import auth
auth.authenticate_user()

Step 3:

from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}

Step 4:

!mkdir -p drive
!google-drive-ocamlfuse drive

Step 5:

print('Files in Drive:')
!ls drive/
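
For reference, newer Colab runtimes also ship a built-in mount helper in the google.colab package that can replace Steps 1 to 4. This is the standard API, shown only as an alternative to the recipe above:

from google.colab import drive

# Prompts for authorization, then mounts My Drive under /content/drive
drive.mount('/content/drive')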

Part 2: Copy-pasting my CNN

I created this CNN with tutorials from a Udemy course. It uses Keras with TensorFlow as the backend. For the sake of simplicity I uploaded a really simple version, which is more than enough to show my problem.

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten 
from keras.layers import Dense 
from keras.layers import Dropout
from keras.optimizers import Adam 
from keras.preprocessing.image import ImageDataGenerator 

Parameters

imageSize=32

batchSize=64

epochAmount=50

CNN

classifier=Sequential() 

classifier.add(Conv2D(32, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu')) #convolutional layer

classifier.add(MaxPooling2D(pool_size = (2, 2))) #pooling layer

classifier.add(Flatten())

ANN

classifier.add(Dense(units=64, activation='relu')) #hidden layer

classifier.add(Dense(units=1, activation='sigmoid')) #output layer

classifier.compile(optimizer = "adam", loss = 'binary_crossentropy', metrics = ['accuracy']) #training method

Image preprocessing

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255) 

training_set = train_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/training_set',
                                                 target_size = (imageSize, imageSize),
                                                 batch_size = batchSize,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/test_set',
                                            target_size = (imageSize, imageSize),
                                            batch_size = batchSize,
                                            class_mode = 'binary')

classifier.fit_generator(training_set,
                         steps_per_epoch = (8000//batchSize),
                         epochs = epochAmount,
                         validation_data = test_set,
                         validation_steps = (2000//batchSize))

Now comes my problem

First off, the training data I used is a collection of 10,000 dog and cat pictures of various resolutions (8,000 in training_set, 2,000 in test_set). With batchSize = 64 this works out to 8000 // 64 = 125 steps per epoch, which matches the progress bars below.

I ran this CNN on Google Colab (with GPU support enabled) and on my PC (tensorflow-gpu on a GTX 1060).
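
As a sanity check (not part of my original code; tf.test.gpu_device_name() is a standard TensorFlow call), the following should print something like '/device:GPU:0' when the Colab GPU runtime is active, and an empty string otherwise:

import tensorflow as tf

# Empty output here would mean TensorFlow sees no GPU at all
print(tf.test.gpu_device_name())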

This is an intermediate result from my PC:

Epoch 2/50
63/125 [==============>...............] - ETA: 2s - loss: 0.6382 - acc: 0.6520

And this from Colab:

Epoch 1/50
13/125 [==>...........................] - ETA: 1:00:51 - loss: 0.7265 - acc: 0.4916

Why is Google Colab so slow in my case?

Personally, I suspect the bottleneck is pulling and then reading the images from my Drive, but I don't know how to solve this other than choosing a different method to import the dataset.
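
One such different method would be to copy the data onto the Colab VM's local disk once and read it from there, instead of fetching every image through the FUSE mount each epoch. A sketch of that idea, assuming the dataset is also stored as a single archive on my Drive (the dataset.zip name is hypothetical):

!cp drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset.zip /tmp/
!unzip -q /tmp/dataset.zip -d /tmp/dataset

# Point the generator at the local copy (test_set would be repointed the same way)
training_set = train_datagen.flow_from_directory('/tmp/dataset/training_set',
                                                 target_size = (imageSize, imageSize),
                                                 batch_size = batchSize,
                                                 class_mode = 'binary')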


Answer:

In my case, the GPU on Colab is super fast compared to the Nvidia GPU card in my PC, going by training speeds. However, when doing simulations, which I can only presume involve the CPU, my PC is nearly 50% faster (i7, 10th gen).

