

Graph disconnected: cannot obtain value for tensor Tensor("input_5:0", shape=(None, None, None, 128), dtype=float32) at layer "input_5"

I am trying to implement a TensorFlow model (encoder-decoder like) in which I initially train with a small number of layers, and append more layers to the model after training. I thought it would be easiest to create the layers as Models, since I intend to set various layers to trainable = False at certain points, and this seemed like the simplest way to do that.
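(For context, freezing a whole sub-model only requires toggling its trainable flag; here is a minimal sketch with a hypothetical sub-model, separate from the failing code below:)

from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

# Hypothetical sub-model used only to illustrate freezing; any Keras Model behaves the same way.
inp = Input((None, None, 128))
out = Conv2D(128, (3, 3), padding="same")(inp)
sub_model = Model(inputs=inp, outputs=out)

sub_model.trainable = False  # freezes every layer inside the sub-model
# The enclosing model must be (re)compiled after changing trainable flags for this to take effect.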

The following code is a simple demonstration of an error I'm getting.


import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.models import Model
from tensorflow.keras.layers import concatenate, Input
from tensorflow.keras.layers import MaxPool2D, UpSampling2D, ReLU
from tensorflow.keras.layers import BatchNormalization


def conv_block(x, filters, kernel_size=(3,3), padding="same", strides=1):
    c = Conv2D(filters, kernel_size, padding=padding, strides=strides)(x)
    c = ReLU()(c)
    c = BatchNormalization()(c)
    c = Conv2D(filters, kernel_size, padding=padding, strides=strides)(c)
    c = ReLU()(c)
    c = BatchNormalization()(c)
    return c

    
def down_block(x, filters, kernel_size=(3,3), padding="same", strides=1):
    c = conv_block(x, filters, kernel_size = kernel_size,
                   padding = padding, strides = strides)
    p = MaxPool2D((2,2))(c) 
    return c,p


def up_block(x, skip, filters, kernel_size=(3,3), padding="same", strides=1):
    us = UpSampling2D((2,2))(x)
    concat = concatenate([us, skip])
    c = conv_block(concat, filters, kernel_size = kernel_size,
                   padding = padding, strides = strides)
    return c


def create_base_model():
    inner_input = Input((None,None,128))
    bn = conv_block(inner_input,128)
    inner_model = Model(inputs=inner_input,outputs=bn)
    return inner_model


def create_downblock_model():
    model_input = Input((None,None,128))
    c,p = down_block(model_input, 128)
    down_model = Model(inputs = model_input, outputs = [c,p])
    return down_model

def create_upblock_model():
    input_u = Input((None,None,128))
    input_c = Input((None,None,128))
    u = up_block(input_u, input_c, 128)
    up_model = Model(inputs=[input_u,input_c], outputs = u)
    return up_model



bn_model = create_base_model()


# 1ST METHOD - This works
down_model1 = create_downblock_model()
up_model1 = create_upblock_model()

x = bn_model(down_model1.output[-1])
x = up_model1([x,down_model1.output[0]])
inner_model = Model(inputs=down_model1.input, outputs=x)



# 2ND METHOD - This doesn't work
down_model2 = create_downblock_model()
up_model2 = create_upblock_model()


x = down_model2(down_model1.output[-1])
x = bn_model(x[-1])
x = up_model2([x,down_model2.output[0]])
x = up_model1([x,down_model1.output[0]])
inner_model = Model(inputs=down_model1.input, outputs=x)

I get the following error for the second method.

Graph disconnected: cannot obtain value for tensor Tensor("input_5:0", shape=(None, None, None, 128), dtype=float32) at layer "input_5". The following previous layers were accessed without issue: ['input_2', 'conv2d_2', 're_lu_2']

Now down_model2 has the layer input_5:0, so I am assuming the issue is with the line x = down_model2(down_model1.output[-1]). I searched around, and topics with a similar error suggest that the problem might be that down_model1.output[-1] isn't an input layer. However, I really don't understand why my first method works completely fine, yet the same approach fails when I try to incorporate two down-blocks. In my first method I use down_model1.output[-1] as input when defining a new model without any problem, so why doesn't it work in the second method?

I'm using TensorFlow 2.1. Apologies if I'm overlooking something simple, but I can't understand why this isn't working. Cheers.

The problem is caused by x = up_model2([x,down_model2.output[0]]) on the third-to-last line. down_model2.output[0] refers to the symbolic output built from down_model2's own Input layer (input_5), which lives in a separate graph and is not connected to down_model1.input; you need the tensors actually produced when down_model2 is called on down_model1's output. Change the last block of code to:

down_model2_output = down_model2(down_model1.output[-1])
x = bn_model(down_model2_output[-1])
x = up_model2([x,down_model2_output[0]])
x = up_model1([x,down_model1.output[0]])
inner_model = Model(inputs=down_model1.input, outputs=x)
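To confirm that the graph is now fully connected, you can build the model and run a dummy forward pass (a quick sketch; the 64x64 spatial size is arbitrary, but it must be divisible by 4 because the graph contains two MaxPool2D/UpSampling2D pairs):

import numpy as np

inner_model.summary()  # builds without the "Graph disconnected" error

# Dummy batch: 1 image, 64x64 spatial, 128 channels to match the Input layers above.
dummy = np.random.rand(1, 64, 64, 128).astype("float32")
print(inner_model(dummy).shape)  # -> (1, 64, 64, 128)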
