
Theano logistic regression dimension mismatch

I have the following code for logistic regression in Theano, but I keep getting a dimension mismatch error:

import numpy as np
import theano
import theano.tensor as T

inputs = [[0,0], [1,1], [0,1], [1,0]]
outputs = [0, 1, 0, 0]

x = T.dmatrix("x")
y = T.dvector("y")
b = theano.shared(value=1.0, name='b')

alpha = 0.01
training_steps = 30000

w_values = np.asarray(np.random.uniform(low=-1, high=1, size=(2, 1)), dtype=theano.config.floatX)
w = theano.shared(value=w_values, name='w', borrow=True)

hypothesis = T.nnet.sigmoid(T.dot(x, w) + b)
cost = T.sum((y - hypothesis) ** 2)
updates = [
    (w, w - alpha * T.grad(cost, wrt=w)),
    (b, b - alpha * T.grad(cost, wrt=b))
]

train = theano.function(inputs=[x, y], outputs=[hypothesis, cost], updates=updates)
test = theano.function(inputs=[x], outputs=[hypothesis])

# Training
cost_history = []

for i in range(training_steps):
    if (i+1) % 5000 == 0:
        print("Iteration #%s: " % str(i+1))
        print("Cost: %s" % str(cost))
    h, cost = train(inputs, outputs)
    cost_history.append(cost)

The error Theano gives is:

Input dimension mis-match. (input[0].shape[1] = 4, input[1].shape[1] = 1)
Apply node that caused the error: Elemwise{sub,no_inplace}(InplaceDimShuffle{x,0}.0, Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)].0)
Toposort index: 7
Inputs types: [TensorType(float64, row), TensorType(float64, matrix)]
Inputs shapes: [(1L, 4L), (4L, 1L)]
Inputs strides: [(32L, 8L), (8L, 8L)]
Inputs values: [array([[ 0.,  1.,  0.,  0.]]), array([[ 0.73105858],
       [ 0.70988924],
       [ 0.68095791],
       [ 0.75706749]])]

So the problem seems to be that y is treated as 1x4 while the hypothesis values are 4x1, so the cost can't be computed.
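A quick NumPy sketch of the shapes reported in the traceback (the values are illustrative, taken from the error output above) shows why the subtraction is problematic:

```python
import numpy as np

# Shapes from the traceback: y arrives as a (1, 4) row,
# the hypothesis as a (4, 1) column.
y = np.array([[0., 1., 0., 0.]])                    # shape (1, 4)
h = np.array([[0.73], [0.71], [0.68], [0.76]])      # shape (4, 1)

# Plain NumPy would silently broadcast the subtraction out to (4, 4),
# giving a meaningless cost; Theano instead raises the mismatch error
# because the matrix's dimensions are not declared broadcastable.
diff = y - h
print(diff.shape)  # (4, 4)
```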

I've tried reshaping the outputs to 4x1:

outputs = np.array([0, 1, 0, 0]).reshape(4,1)

But then it gives me a different dimension-related error:

('Bad input argument to theano function with name "F:/test.py:32" at index 1(0-based)', 'Wrong number of dimensions: expected 1, got 2 with shape (4L, 1L).')
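This second error comes from the input's number of dimensions rather than its shape: `T.dvector` declares a 1-D input, and a (4, 1) array has two dimensions. A minimal NumPy check of the two shapes:

```python
import numpy as np

# A (4, 1) array is 2-D, so it is rejected by a dvector input.
outputs = np.array([0, 1, 0, 0]).reshape(4, 1)
print(outputs.ndim)          # 2 -- "Wrong number of dimensions: expected 1, got 2"
print(outputs.ravel().ndim)  # 1 -- a flat (4,) array would match a dvector
```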

In your code, hypothesis is a matrix of shape n_samples * 1, while y is a vector, so a dimension mismatch occurs. You can either flatten hypothesis or reshape y. The following code works.

import numpy as np
import theano
import theano.tensor as T

inputs = [[0,0], [1,1], [0,1], [1,0]]
outputs = [0, 1, 0, 0]
outputs = np.asarray(outputs, dtype='int32').reshape((len(outputs), 1))

x = T.dmatrix("x")
# y = T.dvector("y")
y = T.dmatrix("y")
b = theano.shared(value=1.0, name='b')

alpha = 0.01
training_steps = 30000

w_values = np.asarray(np.random.uniform(low=-1, high=1, size=(2, 1)), dtype=theano.config.floatX)
w = theano.shared(value=w_values, name='w', borrow=True)

hypothesis = T.nnet.sigmoid(T.dot(x, w) + b)
# hypothesis = T.flatten(hypothesis)
cost = T.sum((y - hypothesis) ** 2)
updates = [
    (w, w - alpha * T.grad(cost, wrt=w)),
    (b, b - alpha * T.grad(cost, wrt=b))
]

train = theano.function(inputs=[x, y], outputs=[hypothesis, cost], updates=updates)
test = theano.function(inputs=[x], outputs=[hypothesis])

# Training
cost_history = []

for i in range(training_steps):
    if (i+1) % 5000 == 0:
        print("Iteration #%s: " % str(i+1))
        print("Cost: %s" % str(cost))
    h, cost = train(inputs, outputs)
    cost_history.append(cost)
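The commented-out `T.flatten(hypothesis)` line above is the other valid fix: keep `y` a `dvector` and flatten the hypothesis instead, so both operands of the cost are 1-D. A NumPy sketch of the resulting shapes (illustrative values, not the Theano graph):

```python
import numpy as np

y = np.array([0., 1., 0., 0.])                    # dvector analogue: shape (4,)
h = np.array([[0.73], [0.71], [0.68], [0.76]])    # sigmoid output: shape (4, 1)

# Flattening h makes both operands (4,), so the elementwise
# difference and the summed squared cost are well defined.
diff = y - h.ravel()
cost = np.sum(diff ** 2)
print(diff.shape)  # (4,)
```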

