I have a large 2-D array (typically 0.5 to 2 GB) of dimension n x 1008. This array contains several images, and the values in the array are the pixel values. Basically, what is done to recover these images is as follows.
This is my solution:
    import numpy as np
    from astropy.io import fits

    counter = 0
    dump = np.array([], dtype=np.uint16)
    # pixelDat is the array shaped n x 1008 containing the pixel values
    for j in xrange(len(pixelDat)):
        # Check if this is the last row for a particular image
        if j == 260*(counter + 1) + counter:
            counter += 1
            dump = np.append(dump, pixelDat[j][:64])
            # Reshape dump to form the image and write it to a fits file
            hdu = fits.PrimaryHDU(np.reshape(dump, (512, 512)))
            hdu.writeto('img' + "{0:0>4}".format(counter) + '.fits', clobber=True)
            # Clear dump to enable formation of the next image
            dump = np.array([], dtype=np.uint16)
        else:
            dump = np.append(dump, pixelDat[j])
I have been wondering if there is a way to speed up this whole process. The first thing that came to mind is using vectorized numpy operations, but I am not sure how to apply them in this case.
PS: Do not worry about the fits and hdu part; it just creates a .fits file for my image.
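One way to see why this vectorizes (a sketch; `extract_images` and the synthetic data are mine, not from the question): each image consumes exactly 261 rows of pixelDat (260 full rows of 1008 pixels plus the first 64 pixels of the next row, since 260*1008 + 64 = 512*512), so the extraction reduces to a reshape and a slice with no per-row appends:

```python
import numpy as np

ROWS_PER_IMG = 261       # 260 full rows + one partial row per image
PX_PER_IMG = 512 * 512   # 260*1008 + 64 == 262144

def extract_images(pixelDat):
    n_imgs = pixelDat.shape[0] // ROWS_PER_IMG
    # View the rows belonging to complete images as one flat chunk per image,
    # then keep only the first 512*512 pixels of each chunk.
    chunks = pixelDat[:n_imgs * ROWS_PER_IMG].reshape(n_imgs, ROWS_PER_IMG * 1008)
    return chunks[:, :PX_PER_IMG].reshape(n_imgs, 512, 512)

# Demo on synthetic data holding two images' worth of rows
data = np.arange(2 * ROWS_PER_IMG * 1008).reshape(-1, 1008)
imgs = extract_images(data)
assert imgs.shape == (2, 512, 512)
# Last pixel of the first image is the 64th pixel of row 260
assert imgs[0, -1, -1] == data[260, 63]
```

Note that the trailing slice `chunks[:, :PX_PER_IMG]` is not contiguous, so the final reshape copies only the pixels that are kept.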
Here is an attempt using flattening and np.split. It avoids copying data.
    def chop_up(pixelDat):
        sh = pixelDat.shape
        try:
            # Since the array is large we do not want a copy;
            # assigning to .shape succeeds only if it can reshape in place.
            pixelDat.shape = -1
        except AttributeError:
            return False  # user must resort to another method
        N = len(pixelDat)
        # Each image occupies 261*1008 flat elements, of which the
        # first 512*512 are the image pixels.
        split = (np.arange(0, N, 261*1008)[:, None] + (0, 512*512)).ravel()[1:]
        if split[-1] > N:
            split = split[:-2]
        result = [x.reshape(512, 512) for x in np.split(pixelDat, split)
                  if len(x) == 512*512]
        pixelDat.shape = sh
        return result
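For instance, on a synthetic pixelDat holding two images it can be used like this (the function body is repeated here only so the demo runs standalone; the data is made up):

```python
import numpy as np

def chop_up(pixelDat):
    # Same function as above, repeated to keep the demo self-contained.
    sh = pixelDat.shape
    try:
        pixelDat.shape = -1  # in-place flatten; raises if a copy would be needed
    except AttributeError:
        return False
    N = len(pixelDat)
    split = (np.arange(0, N, 261*1008)[:, None] + (0, 512*512)).ravel()[1:]
    if split[-1] > N:
        split = split[:-2]
    result = [x.reshape(512, 512) for x in np.split(pixelDat, split)
              if len(x) == 512*512]
    pixelDat.shape = sh
    return result

# Two images' worth of synthetic rows
data = np.arange(2 * 261 * 1008).reshape(-1, 1008)
imgs = chop_up(data)
assert len(imgs) == 2
assert imgs[0].shape == (512, 512)
assert data.shape == (2 * 261, 1008)  # original shape is restored
```

The returned arrays are views into pixelDat, so nothing is copied; writing each one out with fits.PrimaryHDU works as in the original loop.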