Google Colab GPU is not available (tensorflow & keras tf-models errors)
A few days ago I built a BERT model for text classification using Google Colab Pro. Everything worked fine, but since yesterday I always get the output "GPU is NOT AVAILABLE". I haven't changed anything, but I noticed that errors now occur while installing tensorflow_hub and the keras tf-models packages. There were no errors before.
! python --version
!pip install tensorflow_hub
!pip install keras tf-models-official pydot graphviz
I get this message:
ERROR: tensorflow 2.5.0 requires h5py~=3.1.0, but you'll have h5py 2.10.0 which is incompatible.
ERROR: tf-models-official 2.5.0 requires pyyaml>=5.1, but you'll have pyyaml 3.13 which is incompatible.
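To see which versions actually ended up in the environment after these warnings, one option is to query the installed package metadata directly. This is a minimal sketch using only the Python standard library; the package names are taken from the error messages above:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Return the installed version string for each package, or None if it is missing."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found

# Package names taken from the pip error messages above
print(installed_versions(["h5py", "pyyaml", "tensorflow", "tf-models-official"]))
```

Comparing the printed versions against the requirements in the errors (h5py~=3.1.0, pyyaml>=5.1) shows whether the conflicting old versions are still in place.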
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from keras.utils import np_utils
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization as tokenization
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        print(e)
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Output:
Version: 2.5.0
Eager mode: True
Hub version: 0.12.0
GPU is NOT AVAILABLE
I would appreciate it if someone could help me.
PS: I have already tried updating h5py and PyYAML, but the GPU still isn't working.
! pip install h5py==3.1.0
! pip install PyYAML==5.1.2
ERROR: tf-models-official 2.5.0 requires pyyaml>=5.1, but you'll have pyyaml 3.13 which is incompatible.
I was able to fix the issue above by upgrading the pip package before installing tf-models-official, as shown below:
!pip install --upgrade pip
!pip install keras tf-models-official pydot graphviz
The working code is shown below:
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from keras.utils import np_utils
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization as tokenization
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        print(e)
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Output:
1 Physical GPUs, 1 Logical GPUs
Version: 2.5.0
Eager mode: True
Hub version: 0.12.0
GPU is available