How can I write a binary file from integers of different byte lengths?
I'm new to programming and I want to write integers of different byte lengths to a file, e.g.:
my_list = [1, 2, 3200, 60, 72000]
with open(Path, "wb") as f:
    for i in my_list:
        f.write(i.to_bytes(3, "big"))
To read this file back I use:
i, s = 0, 3
with open(Path, "rb") as f:
    rb = f.read()
print([int.from_bytes(rb[i:i+s], 'big') for i in range(i, len(rb), s)])
But this writes 3 bytes for every integer, which wastes space. Is there a way to write a variable ("dynamic") number of bytes for each int, and to read them back as well? Thank you in advance.
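One simple way to get variable-length storage (a sketch, not from the original post: the helper name `to_varlen` and the one-byte length prefix are my own choices, and it assumes each integer fits in at most 255 bytes) is to write, before each value, a single byte holding how many bytes that value needs:

```python
my_list = [1, 2, 3200, 60, 72000]

def to_varlen(n):
    # Use exactly as many bytes as the integer needs (at least 1),
    # preceded by a one-byte length so the reader knows where it ends.
    size = max(1, (n.bit_length() + 7) // 8)
    return bytes([size]) + n.to_bytes(size, "big")

blob = b"".join(to_varlen(n) for n in my_list)

# Reading back: consume one length byte, then that many value bytes.
out, pos = [], 0
while pos < len(blob):
    size = blob[pos]
    out.append(int.from_bytes(blob[pos + 1:pos + 1 + size], "big"))
    pos += 1 + size

print(out)  # [1, 2, 3200, 60, 72000]
```

The `blob` can be written to and read from a file exactly like the 3-byte version above; small values now cost 2 bytes instead of a fixed 3.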
from pickle import dumps, loads
To write:
my_list = dumps([1, 2, 3200, 60, 72000])
with open(Path, "wb") as f:
    f.write(my_list)
To read:
with open(Path, "rb") as f:
    rb = loads(f.read())
print(rb)
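A quick round-trip check (a sketch; the total size depends on the pickle protocol version) shows that pickle already stores each integer in however many bytes it needs, without any fixed-width padding:

```python
from pickle import dumps, loads

data = dumps([1, 2, 3200, 60, 72000])
print(len(data))    # total pickled size in bytes (protocol-dependent)
print(loads(data))  # [1, 2, 3200, 60, 72000] -- the exact list comes back
```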
If you want to store the values in a format that isn't Python-specific, then a binary file of raw bytes is a good choice.
If you want a variable number of bytes per integer to save space, there are other ways to do this. For example, you could use the gzip format.
Here is an example that uses the Python gzip library to create a file of 1000 integers and compare it to a non-gzip file with the same content. It also uses the Python struct library to convert between integers and bytes:
import gzip
import random
import struct
from pathlib import Path

path = Path('/tmp/my_file.bin')
path_z = Path('/tmp/my_file.bin.gz')

random.seed('stackoverflow')
data_len = 1000
my_list = [random.randint(0, 16**4) for i in range(data_len)]
print(f"created list: {my_list[:4]}...")

# Write and read the plain binary file (big-endian unsigned 32-bit ints).
with open(path, "wb") as f:
    data = struct.pack(f'>{data_len}I', *my_list)
    f.write(data)
with open(path, "rb") as f:
    rb = f.read()
read_list = struct.unpack(f'>{data_len}I', rb)
print(f'Normal list: {read_list[:4]}...')
bin_file_size = path.stat().st_size
print(f"Normal Size: {bin_file_size} [bytes]")

# Write and read the same data through gzip compression.
with gzip.open(path_z, "wb") as f:
    data = struct.pack(f'>{data_len}I', *my_list)
    f.write(data)
with gzip.open(path_z, "rb") as f:
    rb = f.read()
read_list = struct.unpack(f'>{data_len}I', rb)
print(f'gzip list: {read_list[:4]}...')
gzip_file_size = path_z.stat().st_size
print(f"gzip Size: {gzip_file_size} [bytes]")
print(f"shrunk to {gzip_file_size / bin_file_size * 100} %")
Which gave the following output:
$ python3 bytes_file.py
created list: [36238, 568, 20603, 3324]...
Normal list: (36238, 568, 20603, 3324)...
Normal Size: 4000 [bytes]
gzip list: (36238, 568, 20603, 3324)...
gzip Size: 2804 [bytes]
shrunk to 70.1 %
These files are still readable by other programs:
$ od -A d --endian=big -t u4 --width=4 --read-bytes 16 /tmp/my_file.bin
0000000 36238
0000004 568
0000008 20603
0000012 3324
And also the gzip file:
$ gunzip -c /tmp/my_file.bin.gz | od -A d --endian=big -t u4 --width=4 --read-bytes 16
0000000 36238
0000004 568
0000008 20603
0000012 3324
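Another variable-length option that other languages can also read (a sketch, not part of the answer above: the function names are my own) is the base-128 "varint" encoding used by formats such as Protocol Buffers, which stores 7 value bits per byte and uses the high bit as a continuation flag:

```python
def encode_varint(n):
    # Emit 7 bits per byte, least-significant group first;
    # the high bit (0x80) means "more bytes follow".
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_varints(blob):
    # Accumulate 7-bit groups until a byte with the high bit clear.
    values, n, shift = [], 0, 0
    for byte in blob:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:  # continuation bit clear: value complete
            values.append(n)
            n, shift = 0, 0
    return values

blob = b"".join(encode_varint(n) for n in [1, 2, 3200, 60, 72000])
print(decode_varints(blob))  # [1, 2, 3200, 60, 72000]
```

Small values take one byte, larger ones grow as needed; the resulting `blob` can be written to a file with `f.write(blob)` just like the struct-packed data.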