How can you do an outer summation over only one dimension of a numpy 2D array?

I have a (square) 2-dimensional numpy array in which I would like to compare (subtract) all of the values within each row to each other, but not to values in other rows, so the output should be a 3D array.

import numpy as np

matrix = np.array([[10, 1, 32], [32, 4, 15], [6, 3, 1]])

The output should be a 3x3x3 array that looks like:

output = [[[0, -9, 22], [0, -28, -17], [0, -3, -5]],
          [[9, 0, 31], [28, 0, 11], [3, 0, -2]],
          [[-22, -31, 0], [17, -11, 0], [5, 2, 0]]]

I.e., for output[0], for each of the 3 rows of matrix, subtract that row's zeroth element from every other element in that row; for output[1] subtract each row's first element, and so on.
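
For concreteness, here is a minimal loop-based sketch of that description (the names n and expected are only illustrative, not from the question):

import numpy as np

matrix = np.array([[10, 1, 32], [32, 4, 15], [6, 3, 1]])
n = matrix.shape[0]

# Build the target elementwise: expected[i, j, k] = matrix[j, k] - matrix[j, i],
# i.e. subtract each row's i-th element from every element of that same row.
expected = np.zeros((n, n, n), dtype=matrix.dtype)
for i in range(n):          # which element of each row gets subtracted
    for j in range(n):      # which row of matrix
        expected[i, j] = matrix[j] - matrix[j, i]

print(expected[0].tolist())  # [[0, -9, 22], [0, -28, -17], [0, -3, -5]]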

This seems to me like a reduced version of numpy's ufunc.outer functionality, which should be possible with tryouter = np.subtract.outer(matrix, matrix) and then taking some clever slice and/or transposition.

Indeed, if you do this, one finds that: output[i, j, k] = tryouter[j, k, j, i]
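
A short sketch of what that 4-D array contains (tryouter is the name used above; the printed slice is just one spot check):

import numpy as np

matrix = np.array([[10, 1, 32], [32, 4, 15], [6, 3, 1]])

# np.subtract.outer applied to two 2-D arrays produces a 4-D array with
# tryouter[a, b, c, d] == matrix[a, b] - matrix[c, d]
tryouter = np.subtract.outer(matrix, matrix)
print(tryouter.shape)  # (3, 3, 3, 3)

# Spot check of the index relation output[i, j, k] == tryouter[j, k, j, i]:
print(tryouter[1, :, 1, 0].tolist())  # [0, -28, -17], which is output[0][1]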

This looks like it should be solvable by using np.transpose to switch the 1 and 2 axes and then taking the arrays on the new 0,1 diagonal, but I can't work out how to do this with np.diagonal or any slicing method.
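
For what it's worth, that diagonal route does appear workable. Here is a sketch, assuming the tryouter array from above (the other variable names are just illustrative); it uses np.diagonal over the two repeated row axes, or equivalently an einsum with a repeated label:

import numpy as np

matrix = np.array([[10, 1, 32], [32, 4, 15], [6, 3, 1]])
tryouter = np.subtract.outer(matrix, matrix)

# np.diagonal over axes 0 and 2 keeps only the entries where both row indices
# match; the diagonal axis is appended last, so a transpose restores the order.
diag = np.diagonal(tryouter, axis1=0, axis2=2)   # diag[k, i, j] == tryouter[j, k, j, i]
out_via_diag = np.transpose(diag, (1, 2, 0))     # out_via_diag[i, j, k]

# The same extraction written as an einsum with the repeated row label j:
out_via_einsum = np.einsum('jkji->ijk', tryouter)

print(np.array_equal(out_via_diag, out_via_einsum))  # True
print(out_via_diag[0].tolist())  # [[0, -9, 22], [0, -28, -17], [0, -3, -5]]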

Is there a way to do this, or is there a simpler approach to this whole problem built into numpy? Thanks :)

You're close; you can do it with broadcasting:

out = matrix[None, :, :] - matrix.T[:, :, None]

Here .T is the same as np.transpose, and using None as an index introduces a new dummy dimension of size 1.
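
A quick check of that one-liner against the expected output from the question (expected here is just the hard-coded target array; note that None is the same as np.newaxis):

import numpy as np

matrix = np.array([[10, 1, 32], [32, 4, 15], [6, 3, 1]])

# matrix[None, :, :] has shape (1, 3, 3) and matrix.T[:, :, None] has shape (3, 3, 1),
# so broadcasting gives out[i, j, k] = matrix[j, k] - matrix[j, i].
out = matrix[None, :, :] - matrix.T[:, :, None]

expected = np.array([[[0, -9, 22], [0, -28, -17], [0, -3, -5]],
                     [[9, 0, 31], [28, 0, 11], [3, 0, -2]],
                     [[-22, -31, 0], [17, -11, 0], [5, 2, 0]]])

print(out.shape)                      # (3, 3, 3)
print(np.array_equal(out, expected))  # True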
