How to treat 1D arrays as (1 by n) 2D arrays in numpy?

I apologize if this has been asked before, but I can't seem to find an answer, or perhaps I am not searching for it correctly.

I am currently writing code in Python using numpy, and my function takes a matrix as input. I want to be able to view a 1D array as a (1 by n) 2D array.

Here is a minimal example of my issue. The following function takes two matrices as input and adds the upper-left element of the first matrix to the bottom-right element of the second matrix.

import numpy as np


def add_corners(A, B):
    # upper-left element of A plus bottom-right element of B
    r = A[0, 0] + B[B.shape[0] - 1, B.shape[1] - 1]
    return r


C = np.array([[1, 2, 3], [4, 5, 6]])
D = np.array([[9, 8], [7, 6], [5, 4], [10, 11]])
E = np.array([1, 2, 3, 4, 5])

print(add_corners(C, D))
print(add_corners(C, E))

print(add_corners(C, E)) leads to an error, since E.shape[1] is not defined for a 1D array. Is there a way to get around this without adding an if statement to check whether the input is a 1D array? That is, I want to refer to the entries of E as E[0, x] rather than just E[x].
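
For reference, E only has a single axis here, so its shape tuple has just one entry and the second index the function relies on does not exist:

>>> E.shape
(5,)

>>> E.shape[1]  # raises IndexError: there is no second axis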

Any help is greatly appreciated!

What you need is to add an additional dimension. You could either do:

add_corners(C, E[:, None])

or with np.expand_dims:

add_corners(C, np.expand_dims(E, -1))

Here's what it looks like:

>>> E
array([1, 2, 3, 4, 5])

>>> E[:, None]
array([[1],
       [2],
       [3],
       [4],
       [5]])

>>> E[:, None].shape
(5, 1)
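
If you specifically want the (1 by n) orientation from the question (so that you can write E[0, x]), put the new axis in front instead, for example with E[None, :] or np.expand_dims(E, 0):

>>> E[None, :]
array([[1, 2, 3, 4, 5]])

>>> E[None, :].shape
(1, 5)

add_corners works with either orientation, since it only ever reads the first and the last element.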

For an even easier and more convenient approach, you can use the flat iterator:

def add_corners(A, B):
    # flat indexes the flattened array, so this works for any number of dimensions
    return A.flat[0] + B.flat[-1]
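
With that version, both calls from the question work unchanged, whether the second argument is 1D or 2D:

>>> add_corners(C, D)
12

>>> add_corners(C, E)
6

flat walks the array in row-major order, so flat[0] is always the first element and flat[-1] the last, regardless of the number of dimensions.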
