How to build a dataset from a large text file without getting a memory error?

I have a text file that is larger than 7.02 GB. I have built a tokenizer based on this text file, and I want to build a dataset like this:

from transformers import LineByLineTextDataset

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="data.txt",
    block_size=128,
)

Because my data is so large, this raises a memory error (the source below reads the entire file into memory with f.read() and tokenizes every line at once). Here is the source code of LineByLineTextDataset:

with open(file_path, encoding="utf-8") as f:
    lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]

batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)
print(batch_encoding)
self.examples = batch_encoding["input_ids"]
self.examples = [{"input_ids": torch.tensor(e, dtype=torch.long)} for e in self.examples]

Assuming my text file had only 4 lines, the following would be printed:

{'input_ids': [[49, 93, 1136, 1685, 973, 363, 72, 3130, 16502, 18], [44, 73, 1685, 279, 7982, 18, 225], [56, 13005, 1685, 4511, 3450, 18], [56, 19030, 1685, 7544, 18]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]}

I have changed the source code as follows so that the memory error no longer occurs:

for line in open(file_path, encoding="utf-8"):
    if (len(line) > 0 and not line.isspace()):
        new_line = line.split()

        batch_encoding = tokenizer(new_line, add_special_tokens=True, truncation=True, max_length=block_size)
        print(batch_encoding)
        print(type(batch_encoding))
        self.examples = batch_encoding["input_ids"]
        self.examples = [{"input_ids": torch.tensor(e, dtype=torch.long)} for e in self.examples]
print(batch_encoding)

However, it now prints the following instead:

{'input_ids': [[49, 93], [3074], [329], [2451, 363, 72, 3130, 16502, 18]], 'token_type_ids': [[0, 0], [0], [0], [0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1], [1], [1], [1, 1, 1, 1, 1, 1]]}
<class 'transformers.tokenization_utils_base.BatchEncoding'>
{'input_ids': [[44, 73], [329], [69], [23788, 18]], 'token_type_ids': [[0, 0], [0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1], [1, 1]]}
<class 'transformers.tokenization_utils_base.BatchEncoding'>
{'input_ids': [[56, 13005], [329], [7522], [7958, 18]], 'token_type_ids': [[0, 0], [0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1], [1, 1]]}
<class 'transformers.tokenization_utils_base.BatchEncoding'>
{'input_ids': [[56, 19030], [329], [11639, 18]], 'token_type_ids': [[0, 0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1, 1]]}
{'input_ids': [[56, 19030], [329], [11639, 18]], 'token_type_ids': [[0, 0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1, 1]]}

How can I change the source code so that it reads the large text file line by line but still produces the same output, without getting a memory error?

You can create a dictionary that stores the byte offset of each line of the .txt file:

offset_dict = {}

with open(large_file_path, 'rb') as f:
    f.readline()  # move over header
    for line in range(number_of_lines):
        offset = f.tell()
        offset_dict[line] = offset
        f.readline()  # advance to the next line so the next offset is recorded correctly
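
Note that number_of_lines is not defined in the snippet above; a minimal way to obtain it (assuming, as the f.readline() call above implies, that the file starts with a single header line) is one counting pass over the file:

# Count the data lines once so the offset loop above knows how far to iterate.
# The header line is subtracted, matching the f.readline() skip above.
with open(large_file_path, 'rb') as f:
    number_of_lines = sum(1 for _ in f) - 1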

Then implement your own __getitem__ method in a PyTorch Dataset (which can then be accessed through a DataLoader), using that offset dictionary for the lookup:

from torch.utils.data import Dataset

class ExampleDataset(Dataset):
    def __init__(self, large_file_path, offset_dict):
        self.large_file_path = large_file_path
        self.offset_dict = offset_dict

    def __len__(self):
        return len(self.offset_dict)

    def __getitem__(self, idx):
        # Seek straight to the stored byte offset, so only the requested
        # line is ever read into memory.
        offset = self.offset_dict[idx]
        with open(self.large_file_path, 'r', encoding='utf-8') as f:
            f.seek(offset)
            line = f.readline()
            return line
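
A minimal usage sketch, assuming the tokenizer and block_size variables from the question and the offset_dict built above: tokenizing each batch of lines inside a collate_fn reproduces the batch_encoding structure from LineByLineTextDataset while only ever holding the current batch in memory.

from torch.utils.data import DataLoader

def collate_fn(lines):
    # Tokenize only the lines of the current batch; the full file is never loaded.
    # padding=True assumes the tokenizer has a pad token defined.
    return tokenizer(
        lines,
        add_special_tokens=True,
        truncation=True,
        max_length=block_size,
        padding=True,
        return_tensors="pt",
    )

dataset = ExampleDataset("data.txt", offset_dict)
loader = DataLoader(dataset, batch_size=32, collate_fn=collate_fn)

for batch in loader:
    print(batch["input_ids"].shape)
    break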
