
Vectorizing numpy calculation without a tensor dot product

I would like to vectorize a particular case of the following mathematical formula (from Table 2 and Appendix A of this paper) with numpy:

[general formula image from Appendix A of the paper]

The case I would like to compute is the following, where the scaling factors under the square root can be ignored.

out_ij = Σ_k (w_kjj - w_jj_bar) * (w_kji - w_ji_bar)

The term w_kij - w_ij_bar is an n × p × p array, where n is typically much greater than p.
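For concreteness, such a centered array can be formed by subtracting the mean over the first (k) axis; a minimal sketch, where the name w and the use of the axis-0 mean for w_ij_bar are my assumptions (the code below works directly with a pre-centered dummy_data):

import numpy as np

n, p = 100, 5
w = np.random.normal(size=(n, p, p))  # hypothetical raw w_kij
w_bar = w.mean(axis=0)                # hypothetical w_ij_bar: average over k
centered = w - w_bar                  # broadcasts (p, p) against (n, p, p)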

I implemented two solutions, neither of which is particularly good: one involves a double loop, while the other quickly fills memory with unnecessary calculations.

import numpy as np

dummy_data = np.random.normal(size=(100, 5, 5))

# approach 1: a double loop
out_hack = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        out_hack[i, j] = (dummy_data.T[j, j, :]*dummy_data[:, j, i]).sum()

# approach 2: slicing a diagonal from a tensor dot product
out = np.tensordot(dummy_data.T, dummy_data, axes=1)
out = out.diagonal(0, 0, 2).diagonal(0, 0, 2)

print((out.round(6) == out_hack.round(6)).all())
# True
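To see why approach 2 wastes work: the tensordot intermediate has shape (p, p, p, p), so p**4 values are computed but only the p**2 double-diagonal entries survive. A quick shape check on the same dummy_data illustrates this:

tmp = np.tensordot(dummy_data.T, dummy_data, axes=1)
print(tmp.shape)  # (5, 5, 5, 5) -> 625 values computed for a 25-entry result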

Is there a way to find middle ground between these two approaches?

np.einsum translates that almost literally:

np.einsum('kjj,kji->ij', dummy_data, dummy_data)
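As a sanity check, here is a self-contained sketch confirming that the einsum matches the double loop; the subscripts 'kjj,kji->ij' contract over k directly, so no (p, p, p, p) intermediate is ever materialized:

import numpy as np

dummy_data = np.random.normal(size=(100, 5, 5))

# out[i, j] = sum over k of dummy_data[k, j, j] * dummy_data[k, j, i]
out_einsum = np.einsum('kjj,kji->ij', dummy_data, dummy_data)

# double loop from the question (dummy_data.T[j, j, :] == dummy_data[:, j, j])
out_hack = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        out_hack[i, j] = (dummy_data[:, j, j] * dummy_data[:, j, i]).sum()

print(np.allclose(out_einsum, out_hack))  # True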


 