
"Unsupported number of image dimensions" while using image_utils from Transformers

I'm trying to follow this HuggingFace tutorial: https://huggingface.co/blog/fine-tune-vit

Everything works fine with their "beans" dataset, but when I use my own dataset with my own images, I run into "Unsupported number of image dimensions". I'd appreciate any pointers on how to debug this.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_2042949/883871373.py in <module>
----> 1 train_results = trainer.train()
      2 trainer.save_model()
      3 trainer.log_metrics("train", train_results.metrics)
      4 trainer.save_metrics("train", train_results.metrics)
      5 trainer.save_state()

~/miniconda3/lib/python3.9/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1532             self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
   1533         )
-> 1534         return inner_training_loop(
   1535             args=args,
   1536             resume_from_checkpoint=resume_from_checkpoint,

~/miniconda3/lib/python3.9/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   1754 
   1755             step = -1
-> 1756             for step, inputs in enumerate(epoch_iterator):
   1757 
   1758                 # Skip past any already trained steps if resuming training

~/miniconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py in __next__(self)
    626                 # TODO(https://github.com/pytorch/pytorch/issues/76750)
...
--> 119         raise ValueError(f"Unsupported number of image dimensions: {image.ndim}")
    120 
    121     if image.shape[first_dim] in (1, 3):

ValueError: Unsupported number of image dimensions: 2

https://github.com/huggingface/transformers/blob/main/src/transformers/image_utils.py
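For context, infer_channel_dimension_format only handles 3-D (single image) or 4-D (batched) arrays; a 2-D grayscale array has no channel axis at all, so it falls through to the raise shown in the traceback. A minimal sketch of the failure, assuming a transformers version where the helper accepts a bare NumPy array:

import numpy as np
from transformers.image_utils import infer_channel_dimension_format

rgb = np.zeros((3, 224, 224), dtype=np.uint8)  # 3-D: has a channel axis
gray = np.zeros((390, 540), dtype=np.uint8)    # 2-D: grayscale, no channel axis

print(infer_channel_dimension_format(rgb))  # ChannelDimension.FIRST
infer_channel_dimension_format(gray)        # raises ValueError: Unsupported number of image dimensions: 2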

I tried comparing the shape of my data against theirs, and they came out the same.

>>> prepared_ds['train'][0:2]['pixel_values'].shape
torch.Size([2, 3, 224, 224])
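Note that this check can pass even with a bad image in the split: with the tutorial's lazy with_transform, slicing [0:2] only decodes the first two examples, so an image deeper in the split is never touched. Indexing a suspect example directly goes through the same transform path (a sketch; index 8 is the offending image found below):

prepared_ds['train'][8]['pixel_values']  # raises the same ValueError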

I followed the stack trace and found that the error comes from the infer_channel_dimension_format function, so I wrote this dirty loop to find the problematic image:

from transformers.image_utils import infer_channel_dimension_format

try:
    for i, img in enumerate(prepared_ds["train"]):
        infer_channel_dimension_format(img["pixel_values"])
except ValueError:
    # The bad example raises while being fetched, before i advances,
    # so i still holds the last good index; the bad image is at i + 1.
    print(i + 1)

When I inspected that image, I found that it isn't RGB like the others:

>>> ds["train"][8]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=390x540>,
 'image_file_path': '/data/alamy/img/00000/000001069.jpg',
 'labels': 0}
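An alternative is to scan the raw dataset for non-RGB modes up front (a sketch, assuming the untransformed ds keeps PIL images in an 'image' column, as in the tutorial):

# Indices of every example whose PIL image is not RGB
# ("L" is grayscale, "RGBA" carries an alpha channel, etc.)
bad = [i for i, ex in enumerate(ds["train"]) if ex["image"].mode != "RGB"]
print(bad)  # e.g. [8]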

So my solution was to add a convert('RGB') in my transform:

def transform(example_batch):
    # Take a list of PIL images and turn them to pixel values
    inputs = feature_extractor([x.convert("RGB") for x in example_batch['image']], return_tensors='pt')

    # Don't forget to include the labels!
    inputs['labels'] = example_batch['labels']
    return inputs
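The transform is then applied lazily, as the tutorial does:

# Decode and convert images on the fly at access time
prepared_ds = ds.with_transform(transform)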

I'll try to find some time to come back and clean this up into a fully reproducible example. (Sorry!)

I ran into the same error today; after using the collate function below, the error above was resolved:

import torch

def collate_fn(batch):
    return {
        'pixel_values': torch.stack([x['pixel_values'] for x in batch]),
        'labels': torch.tensor([x['labels'] for x in batch])
    }
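This is the same collator the fine-tune-vit tutorial wires into the Trainer; for completeness, a sketch of that wiring (model, training_args, and prepared_ds come from the rest of the tutorial setup):

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collate_fn,
    train_dataset=prepared_ds["train"],
)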
