First of all, I have a group of twelve 2x2 matrices:
II = np.identity(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PPP = (-II + 1j*X + 1j*Y + 1j*Z)/2
PPM = (-II + 1j*X + 1j*Y - 1j*Z)/2
PMM = (-II + 1j*X - 1j*Y - 1j*Z)/2
MMM = (-II - 1j*X - 1j*Y - 1j*Z)/2
MMP = (-II - 1j*X - 1j*Y + 1j*Z)/2
MPP = (-II - 1j*X + 1j*Y + 1j*Z)/2
PMP = (-II + 1j*X - 1j*Y + 1j*Z)/2
MPM = (-II - 1j*X + 1j*Y - 1j*Z)/2
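As a quick sanity check (a sketch using the same definitions), every element of this set is unitary, which is what makes the Hermitian conjugate of a product of them equal to the inverse of that product:

```python
import numpy as np

II = np.identity(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PPP = (-II + 1j*X + 1j*Y + 1j*Z)/2
PPM = (-II + 1j*X + 1j*Y - 1j*Z)/2
PMM = (-II + 1j*X - 1j*Y - 1j*Z)/2
MMM = (-II - 1j*X - 1j*Y - 1j*Z)/2
MMP = (-II - 1j*X - 1j*Y + 1j*Z)/2
MPP = (-II - 1j*X + 1j*Y + 1j*Z)/2
PMP = (-II + 1j*X - 1j*Y + 1j*Z)/2
MPM = (-II - 1j*X + 1j*Y - 1j*Z)/2

group = [II, X, Y, Z, PPP, PPM, PMM, MMM, MMP, MPP, PMP, MPM]
for U in group:
    # U @ U^dagger == I for every element
    assert np.allclose(U @ U.conj().T, II)
```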
Currently I have a function operator_groups that draws a random matrix from this group on every iteration of the j loop and appends it to a list sequence. The matrices drawn within the individual j iterations are also used for some calculations, irrelevant to our discussion here. At the end of the j loop, the order of the elements of sequence is reversed, linalg.multi_dot is performed, and then the Hermitian conjugate of the result is taken (hence the .conj().T).
def operator_groups():
    return random.choice([II, X, Y, Z, PPP, PPM, PMM, MMM, MMP, MPP, PMP, MPM])

for i in range(1, sample_size+1, 1):
    sequence = []
    for j in range(1, some_number, 1):
        noise = operator_groups()
        """some matrix calculations here"""
        sequence.append(noise)
    sequence_inverse = np.linalg.multi_dot(sequence[::-1]).conj().T
Now I wish to vectorize the i loop, by doing each step of the j loop as one big array operation. The noise is now an ndarray of N matrices (instead of just 1 matrix) randomly sampled from the group, with each matrix belonging to one of the formerly sequential i iterations, now run in parallel. The code now looks something like this.
def operator_groups(sample_size):
    return random.sample([II, X, Y, Z], sample_size)

sequence = []
for j in range(1, some_number, 1):
    noise = operator_groups(sample_size)
    sequence.append(noise)
sequence_inverse = np.linalg.multi_dot(sequence[::-1]).conj().T
Now that sequence is a multi-dimensional array, I'm having trouble appending the multidimensional noise in the right order within sequence, and subsequently also with performing linalg.multi_dot on the reversed sequence and taking its Hermitian conjugate. In this case I'd want to multi_dot the reverse of all the stored-up noise for each column, corresponding to each run of the j loop. How can this be done?
I'll provide some "pseudo-examples" below to further demonstrate my problem, using j = 3. For simplicity, here I'll only "randomly draw" from X, Y, Z.
Non-vectorised case:
i = 1
sequence = []
j = 1
noise = X (randomised)
sequence.append(noise)
sequence = [X]
j = 2
noise = Y (randomised)
sequence.append(noise)
sequence = [X, Y]
j = 3
noise = Z (randomised)
sequence.append(noise)
sequence = [X, Y, Z]
end of j loop
take reverse order: [Z, Y, X]
do multi_dot: [ZYX] (Note: dot products, not element-wise multiplication)
take conjugate and transpose (to get the Hermitian conjugate): [ZYX].conj().T = [ZYX.conj().T]
Vectorized case (say, with sample_size = 3):
sequence = []
j = 1
noise = [X,Z,Y](randomised)
sequence.append(noise)
sequence = [[X,Z,Y]]
j = 2
noise = [Z,Y,X] (randomised)
sequence.append(noise)
sequence = [[X,Z,Y],
[Z,Y,X]]
j = 3
noise = [Z,Z,X] (randomised)
sequence.append(noise)
sequence = [[X,Z,Y],
[Z,Y,X],
[Z,Z,X]]
end of j loop
take reverse order: [[Z,Z,X],
[Z,Y,X],
[X,Z,Y]]
do multi_dot (along an axis, which is what I have trouble with): [ZZX, ZYZ, XXY]
take conjugate and transpose (to get the Hermitian conjugate):
[ZZX, ZYZ, XXY].conj().T = [ZZX.conj().T, ZYZ.conj().T, XXY.conj().T]
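For concreteness, here is that j = 3 example written out as runnable (but still unvectorised) NumPy code; the per-column loop is only a brute-force reference for the result I'm after, not the vectorized solution:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# sequence[j][n] is the matrix drawn at step j for sample n (the example above)
sequence = [[X, Z, Y],
            [Z, Y, X],
            [Z, Z, X]]

# for each sample n: reverse the j order, multi_dot, then Hermitian conjugate
sequence_inverse = [
    np.linalg.multi_dot([sequence[j][n] for j in reversed(range(3))]).conj().T
    for n in range(3)
]
```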
I hope these examples demonstrate my problem.
With your two random selectors:
In [13]: operator_groups() # returns one (2,2) array
Out[13]:
array([[-0.5+0.5j, 0.5-0.5j],
[-0.5-0.5j, -0.5-0.5j]])
In [14]: operator_groups1(4) # returns a list of (2,2) arrays
Out[14]:
[array([[0.+0.j, 1.+0.j],
[1.+0.j, 0.+0.j]]), array([[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]]), array([[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]]), array([[1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j]])]
Your loop creates a list of arrays:
In [15]: seq=[]
...: for j in range(4):
...: seq.append(operator_groups())
...:
In [16]: seq
Out[16]:
[array([[-0.5-0.5j, -0.5+0.5j],
[ 0.5+0.5j, -0.5+0.5j]]), array([[1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j]]), array([[-0.5+0.5j, -0.5-0.5j],
[ 0.5-0.5j, -0.5-0.5j]]), array([[-0.5-0.5j, 0.5-0.5j],
[-0.5-0.5j, -0.5+0.5j]])]
which can be given to multi_dot for sequential dotting:
In [17]: np.linalg.multi_dot(seq)
Out[17]:
array([[0.-1.j, 0.+0.j],
[0.+0.j, 0.+1.j]])
If we build the sequence with the groups selector, we get a list of lists:
In [18]: seq=[]
...: for j in range(4):
...: seq.append(operator_groups1(3))
...:
In [19]: seq
Out[19]:
[[array([[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]]), array([[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]]), array([[0.+0.j, 1.+0.j],
[1.+0.j, 0.+0.j]])], [array([[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]]), array([[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]]), array([[0.+0.j, 1.+0.j],
[1.+0.j, 0.+0.j]])], [array([[1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j]]), array([[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]]), array([[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]])], [array([[1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j]]), array([[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]]), array([[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]])]]
In [20]: len(seq)
Out[20]: 4
In [21]: len(seq[0])
Out[21]: 3
We can 'stack' the inner lists, creating a list of (n,2,2) arrays:
In [22]: seq1 = [np.stack(el) for el in seq]
In [23]: seq1
Out[23]:
[array([[[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]],
[[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]],
[[ 0.+0.j, 1.+0.j],
[ 1.+0.j, 0.+0.j]]]), array([[[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]],
[[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]],
[[ 0.+0.j, 1.+0.j],
[ 1.+0.j, 0.+0.j]]]), array([[[ 1.+0.j, 0.+0.j],
[ 0.+0.j, 1.+0.j]],
[[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]],
[[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]]]), array([[[ 1.+0.j, 0.+0.j],
[ 0.+0.j, 1.+0.j]],
[[ 1.+0.j, 0.+0.j],
[ 0.+0.j, -1.+0.j]],
[[ 0.+0.j, -0.-1.j],
[ 0.+1.j, 0.+0.j]]])]
We can then apply matmul repeatedly on this list:
In [25]: res = seq1[0]
...: for el in seq1[1:]:
...: res = res@el
...:
...:
In [26]: res
Out[26]:
array([[[1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j]],
[[1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j]],
[[1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j]]])
In effect matmul is like dot, but it treats the leading dimension(s) as a 'batch' dimension.
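Putting the pieces together: the whole run can be done with one fancy-indexed draw, a reduce with matmul over the reversed j axis, and a Hermitian conjugate that swaps the last two axes of every block. A self-contained sketch (the sizes and the seeded default_rng generator are my placeholders, not your original code):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)           # seeded so the run is repeatable

II = np.identity(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
group = np.stack([II, X, Y, Z])          # (4, 2, 2)

some_number, sample_size = 5, 3          # placeholder sizes

# draw sample_size matrices at every j step in one go: (some_number, sample_size, 2, 2)
seq = group[rng.integers(len(group), size=(some_number, sample_size))]

# reverse the j axis and batch-multiply; matmul broadcasts over the sample axis
prod = reduce(np.matmul, seq[::-1])      # (sample_size, 2, 2)

# Hermitian conjugate of every (2,2) block: conjugate and swap the last two axes
sequence_inverse = prod.conj().swapaxes(-1, -2)
```

Note that .conj().T would transpose the batch axis too; swapaxes(-1, -2) (or transpose(0, 2, 1)) keeps the batch axis in place and transposes only each 2x2 block.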
With random selection it's a pain to compare different results (unless I set the seed), so I leave the verification up to you.