
Keras input layer limits

The example model from a book works well:

from tensorflow import keras  # import needed for the snippet to run

model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))

But when I try to change input_shape from [28, 28] to [1920, 1080] or [1,], I get this error:

File "C:\Users\User1\PycharmProjects\untitled3\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 6862, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from

tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[2073600,300] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu [Op:Mul]

To me it looks like it is loading a tensor onto the processor, yet no data is involved, just model creation. What could the problem be?

This is why convolutional neural networks should be used when dealing with images. Every pixel becomes a feature, and every pixel will be connected to every unit of your dense layer.

Let's say you have 1920×1080 pictures and a dense layer with 300 units: that is 1920 × 1080 × 300 = 622,080,000 weights just for your first layer, which will exhaust your memory (your traceback shows the allocation failing on the CPU). Try using a convnet instead:
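The arithmetic can be checked directly (a sketch; the image size and layer width come from the question above):

```python
# Flattening a 1920x1080 image yields 2,073,600 input features.
inputs = 1920 * 1080          # features after Flatten
units = 300                   # first Dense layer size from the question
weights = inputs * units      # one weight per (feature, unit) pair
print(weights)                # 622,080,000 weights (plus 300 biases)

# At 4 bytes per float32 that is roughly 2.3 GiB for one weight matrix.
print(round(weights * 4 / 2**30, 2))
```

This matches the shape[2073600,300] tensor in the OOM message: the allocator is trying to create that full weight matrix at model-creation time.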

from tensorflow import keras
from tensorflow.keras import layers

# input_shape and num_classes must be defined first, e.g.:
# input_shape = (1920, 1080, 3)  # Conv2D expects a channel axis
# num_classes = 10

model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
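To see why the convnet is so much cheaper, its parameter count can be worked out by hand (a sketch assuming a hypothetical 128×128×3 input and 10 classes; a Conv2D layer uses kernel_h × kernel_w × in_channels × filters weights plus one bias per filter, so its cost does not depend on the image size at all):

```python
# Assumed input: 128x128 RGB image, 10 classes (hypothetical example values).
h = w = 128
conv1 = 3 * 3 * 3 * 32 + 32        # 3x3 kernel, 3 in-channels, 32 filters -> 896
h, w = (h - 2) // 2, (w - 2) // 2  # "valid" 3x3 conv, then 2x2 max-pool -> 63x63
conv2 = 3 * 3 * 32 * 64 + 64       # 3x3 kernel, 32 in-channels, 64 filters -> 18,496
h, w = (h - 2) // 2, (w - 2) // 2  # -> 30x30
dense = h * w * 64 * 10 + 10       # Flatten -> Dense(10): 576,010
total = conv1 + conv2 + dense
print(total)                       # 595,402 parameters
```

Roughly 600 thousand parameters versus 622 million for a Dense(300) over raw pixels: the only layer whose size still grows with the image is the final Dense after Flatten, which is why convnets scale to large images.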
