
Why theta*X and not theta'*X in practice?

While doing Andrew Ng's ML MOOC, in theory he explains that theta'*X gives us the hypothesis, but in the coursework we use theta*X. Why is that?

theta'*X is used to calculate the hypothesis for a single training example, when X is a vector. You have to transpose theta to match the h(x) definition.

In practice, since you have more than one training example, X is a matrix (your training set) of dimension m x n, where m is the number of training examples and n is the number of features.

Now, you want to calculate h(x) for all your training examples with your theta parameter in just one operation, right?

Here is the trick: theta has to be an n x 1 vector. Then, when you do the matrix-vector multiplication X*theta, you obtain an m x 1 vector containing the h(x) value for every training example in your training set (the X matrix). The matrix multiplication builds the vector h(x) row by row, doing the corresponding arithmetic, so each entry equals the h(x) definition for that training example.
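The trick above can be sketched in NumPy (made-up numbers; `@` is matrix multiplication, `.T` is transpose, standing in for Octave's `*` and `'`):

```python
import numpy as np

# Hypothetical data: m = 3 training examples, n = 2 parameters (incl. intercept)
X = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])          # m x n design matrix (first column is the bias term)
theta = np.array([[0.5], [1.0]])    # n x 1 parameter vector

# Single example: theta' * x(i) gives one scalar prediction
h_first = theta.T @ X[0:1, :].T     # 1 x 1

# All examples at once: X * theta gives an m x 1 vector of predictions
h_all = X @ theta                   # m x 1

print(h_all.ravel())                # each row is theta' * x(i) for one example
```

Each row of `h_all` is exactly the single-example `theta' * x(i)` product, computed in one shot.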

You can do the math by hand; I did, and now it is clear. Hope this helps someone. :)

In mathematics, a 'vector' is conventionally defined as a vertically-stacked (column) array; e.g. a 3-element column vector signifies a single point in 3-dimensional space.

A 'horizontal' (row) vector typically signifies an array of observations, e.g. a tuple of 3 scalar observations.

Equally, a matrix can be thought of as a collection of vectors; e.g. four 3-dimensional column vectors placed side by side form a 3 x 4 matrix.

A scalar can be thought of as a matrix of size 1x1, and therefore its transpose is the same as the original.

More generally, an n-by-m matrix W can also be thought of as a transformation from an m-dimensional vector x to an n-dimensional vector y , since multiplying that matrix with an m-dimensional vector will yield a new n-dimensional one. If your 'matrix' W is '1xn', then this denotes a transformation from an n-dimensional vector to a scalar.
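The "matrix as transformation" view can be checked numerically in NumPy (all values here are made up for illustration):

```python
import numpy as np

# A hypothetical 2x3 matrix W maps 3-dimensional vectors to 2-dimensional ones
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])     # n x m with n = 2, m = 3
x = np.array([1.0, 1.0, 1.0])       # m-dimensional input vector

y = W @ x                           # n-dimensional output
print(y.shape)                      # (2,)

# A 1 x m 'matrix' maps an m-dimensional vector to a single number
w_row = np.array([[1.0, 2.0, 3.0]])
s = w_row @ x                       # one-element result, i.e. a scalar transformation
```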

Therefore, notationally, it is customary to introduce the problem from the mathematical notation point of view, eg y = Wx .

However, for computational reasons, sometimes it makes more sense to perform the calculation as a "vector times a matrix" rather than "matrix times a vector". Since (Wx)' === x'W' , sometimes we solve the problem like that, and treat x' as a horizontal vector. Also, if W is not a matrix, but a scalar, then Wx denotes scalar multiplication, and therefore in this case Wx === xW .
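The identity (Wx)' === x'W' is easy to confirm numerically (hypothetical values):

```python
import numpy as np

# Numerical check of the transpose identity (Wx)' == x'W'
W = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])           # 3 x 2
x = np.array([[1.0],
              [2.0]])                # 2 x 1 column vector

left = (W @ x).T                     # 1 x 3 row vector
right = x.T @ W.T                    # the same thing, computed the other way
print(np.allclose(left, right))      # True
```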

I don't know the exercises you speak of, but my assumption would be that in the course he introduced theta as a proper, vertical vector, but then transposed it to perform proper calculations, ie a transformation from a vector of n-dimensions to a scalar (which is your prediction).

Then in the exercises, presumably you were either dealing with a scalar 'theta', so there was no point transposing it and it was left as theta for convenience; or theta was defined as a horizontal (i.e. transposed) vector to begin with for some reason (e.g. printing convenience), and was then left in that state when performing the necessary transformation.

I don't know what the dimensions of your theta and X are (you haven't provided anything), but it all depends on the dimensions of X, theta, and the hypothesis. Let's say m is the number of features and n the number of examples. Then, if theta is an m x 1 vector and X is an n x m matrix, X*theta is an n x 1 hypothesis vector.

But you will get the same numbers (as a row vector) if you calculate theta'*X with X stored as m x n. You can also get that result with theta*X if theta is 1 x m and X is m x n.

Edit:

As @Tasos Papastylianou pointed out, the same result is obtained when X is m x n: both (theta.'*X).' and X.'*theta are answers. If the hypothesis should be a 1 x n vector, then theta.'*X is an answer. If theta is 1 x m, X is m x n, and the hypothesis is 1 x n, then theta*X is also a correct answer.
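These equivalences can be verified with a quick NumPy sketch (random hypothetical data; m = 2 features, n = 3 examples, matching this answer's naming):

```python
import numpy as np

# Checking the dimension equivalences listed above
rng = np.random.default_rng(0)
theta = rng.standard_normal((2, 1))   # m x 1
X_rows = rng.standard_normal((3, 2))  # n x m: one example per row
h = X_rows @ theta                    # n x 1 hypothesis vector

X_cols = X_rows.T                     # m x n: one example per column
print(np.allclose((theta.T @ X_cols).T, h))   # True
print(np.allclose(X_cols.T @ theta, h))       # True
print(np.allclose(theta.T @ X_cols, h.T))     # True: 1 x n row form
```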

I had the same problem (ML course, linear regression). After spending some time on it, here is how I see it: there is a confusion between the x(i) vector and the X matrix.

About the hypothesis h(xi) for a single example vector xi (xi belongs to R3x1), with theta also in R3x1:

theta  = [t0; t1; t2]      # R(3x1)
theta' = [t0 t1 t2]        # R(1x3)
xi     = [1; xi,1; xi,2]   # R(3x1)
theta' * xi = t0 + t1.xi,1 + t2.xi,2 = h(xi)   # R(1x1), i.e. a real number

so theta'*xi works here.

About the vectorized equation: in this case X is not the same thing as x (a vector). It is a matrix with m rows and n+1 columns (m = number of examples, n = number of features, plus the intercept column of 1s for the t0 term).

Therefore, from the previous example with n = 2, the matrix X is an m x 3 matrix: X = [1 x0,1 x0,2 ; 1 x1,1 x1,2 ; ... ; 1 xi,1 xi,2 ; ... ; 1 xm,1 xm,2]

If you want to vectorize the equation for the algorithm, consider that for each row i you need h(xi) (a real number), so you need to implement X * theta.

That will give you, for each row i: [1 xi,1 xi,2] * [t0; t1; t2] = t0 + t1.xi,1 + t2.xi,2
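The row-by-row computation above can be sketched in NumPy (hypothetical values for x and theta):

```python
import numpy as np

# Hypothetical m = 2 examples, n = 2 features plus the leading 1 (intercept) column
X = np.array([[1.0, 0.5, 2.0],
              [1.0, 1.5, 3.0]])        # m x 3
theta = np.array([[1.0],
                  [2.0],
                  [3.0]])              # [t0; t1; t2]

h = X @ theta                          # m x 1: row i is t0 + t1*xi,1 + t2*xi,2
row0 = X[0] @ theta                    # the same number, computed for the first row only
print(h.ravel())
```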

Hope it helps

I have used Octave notation and syntax for writing matrices: 'comma' for separating column items, 'semicolon' for separating row items, and 'single quote' for transpose.

In the problem under discussion, theta = [theta0; theta1; theta2; theta3; ...; thetaf].

'theta' is therefore a column vector, or a '(f+1) x 1' matrix. Here 'f' is the number of features, and theta0 is the intercept term.

With just one training example, x is a '(f+1) x 1' matrix or a column vector. Specifically, x = [x0; x1; x2; x3; ...; xf], where x0 is always '1'.

In this special case, the '1 x (f+1)' matrix formed by transposing theta can be multiplied with x to give the correct '1x1' hypothesis matrix, i.e. a real number.

h = theta' * x is a valid expression.

But the programming exercise deals with multiple training examples. If there are 'm' training examples, X is an 'm x (f+1)' matrix.

To simplify, let there be two training examples each with 'f' features.

X = [x1; x2].

(Please note 1 and 2 inside the brackets are not exponential terms but indexes for the training examples).

Here, x1 = [x0^(1), x1^(1), x2^(1), x3^(1), ..., xf^(1)] and x2 = [x0^(2), x1^(2), x2^(2), x3^(2), ..., xf^(2)].

So X is a '2 x (f+1)' matrix.

Now to answer the question: theta is a '(f+1) x 1' matrix and X is a '2 x (f+1)' matrix. With this, the following expressions are not valid.

  1. theta' * X
  2. theta * X

The expected hypothesis matrix, 'h', should have two predicted values (two real numbers), one for each of the two training examples. 'h' is a '2 x 1' matrix or column vector.

The hypothesis can be obtained only by using the expression X * theta, which is valid and algebraically correct: multiplying a '2 x (f+1)' matrix with a '(f+1) x 1' matrix results in a '2 x 1' hypothesis matrix.
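A NumPy shape check mirrors this argument (values are hypothetical; f = 3 features, 2 examples):

```python
import numpy as np

# Shape check for the two-example, f-feature case described above
f = 3
theta = np.ones((f + 1, 1))            # (f+1) x 1 column vector
X = np.arange(2 * (f + 1), dtype=float).reshape(2, f + 1)   # 2 x (f+1)

h = X @ theta                          # valid: 2 x 1 hypothesis vector
print(h.shape)                         # (2, 1)

# The other orderings fail the dimension check:
for expr in (lambda: theta.T @ X, lambda: theta @ X):
    try:
        expr()
    except ValueError as e:
        print("invalid:", e)
```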

This is because the computer has the coordinate (0,0) positioned on the top left, while geometry has the coordinate (0,0) positioned on the bottom left.


When Andrew Ng first introduced x in the cost function J(theta), x is a column vector aka

[x0; x1; ... ; xn]

i.e. 

x0;
x1;
...;
xn

However, in the first programming assignment, we are given X, which is an (m * n) matrix (# training examples * features per training example). The discrepancy comes from the fact that, in the data file, the individual x vectors (training samples) are stored as horizontal row vectors rather than as vertical column vectors!!

This means the X matrix you see is actually an X' (X Transpose) matrix!!

Since we have X', we need to make our code work given that our equation is looking for h(theta) = theta' * X (when the vectors in matrix X are column vectors).

we have the linear algebra identity for matrix and vector multiplication:

(A*B)' == (B') * (A') as shown here Properties of Transposes

let t = theta
given, h(t) = t' * X
h(t)' = (t' * X)'
      = X' * t
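This rearrangement can be checked in NumPy (hypothetical values; `X_file` plays the role of the row-wise data loaded from the assignment file):

```python
import numpy as np

# If the file stores examples as rows (X_file = X'), then X_file @ theta
# reproduces the transpose of the lecture form theta' @ X
theta = np.array([[1.0], [2.0]])
X = np.array([[1.0, 1.0, 1.0],        # examples as columns, as in the lectures
              [0.5, 1.5, 2.5]])       # 2 x 3
X_file = X.T                          # 3 x 2, as loaded from the assignment data

h_lecture = theta.T @ X               # 1 x 3 row of predictions
h_code = X_file @ theta               # 3 x 1 column of the same predictions
print(np.allclose(h_code, h_lecture.T))   # True
```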

Now we have our variables in the format they were actually given to us. What I mean is that our input file really holds X', and theta is normal, so multiplying them in the order above gives a practically equivalent output to the one he taught us to use, which was theta' * X. Since we are summing all the elements of h(t)' at the end, it doesn't matter that it is transposed for the final calculation. However, if you wanted h(t) rather than h(t)', you could always take your computed result and transpose it, because

(A')' == A

However, for the coursera machine learning programming assignment 1, this is unnecessary.
