
PyTorch object detection model optimization

I want to reduce the size of an object detection model. To that end, I tried optimising a Faster R-CNN model for object detection with the pytorch-mobile optimiser, but the generated .pt file is the same size as the original model.

I used the code below:

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Load a pretrained Faster R-CNN detector and script it
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
script_model = torch.jit.script(model)

# Optimize the scripted model for mobile and save it
script_model_vulkan = optimize_for_mobile(script_model, backend='Vulkan')
torch.jit.save(script_model_vulkan, "frcnn.pth")

You have to quantize your model first. optimize_for_mobile only applies graph-level rewrites (operator fusion and similar passes) and keeps the FP32 weights, which account for almost all of the file size, which is why the saved file stays the same size. Follow the quantization steps (the edit below walks through them for resnet50), and then use these methods:

from torch.utils.mobile_optimizer import optimize_for_mobile
script_model_vulkan = optimize_for_mobile(script_model, backend='Vulkan')
torch.jit.save(script_model_vulkan, "frcnn.pth")
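
Note that the backend='Vulkan' path assumes a device with a Vulkan-capable GPU and a PyTorch runtime built with Vulkan support; if that is not your target, a minimal sketch using the default CPU backend looks like this (the output file name is illustrative):

from torch.utils.mobile_optimizer import optimize_for_mobile

# Default backend is CPU; no Vulkan-enabled PyTorch build required.
script_model_optimized = optimize_for_mobile(script_model)
torch.jit.save(script_model_optimized, "frcnn_cpu.pt")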

Edit:

Quantization process for a resnet50 model:

import os
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)
model.eval()

def print_model_size(mdl):
    torch.save(mdl.state_dict(), "tmp.pt")
    print("%.2f MB" % (os.path.getsize("tmp.pt") / 1e6))
    os.remove("tmp.pt")

print_model_size(model)  # prints the original (FP32) model size

# Post-training static quantization: prepare() inserts observers,
# convert() swaps float modules for quantized (int8) ones.
backend = "qnnpack"  # mobile/ARM backend; use "fbgemm" for x86
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
# (A full static-quantization flow would run representative
# calibration data through the prepared model at this point.)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)

print_model_size(model_static_quantized)  # prints the quantized model size
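
Putting the two steps together, here is a minimal end-to-end sketch of quantize-then-optimize. It uses torchvision's quantization-ready resnet50 variant (torchvision.models.quantization.resnet50), since a plain resnet50 lacks the QuantStub/DeQuantStub wrappers needed to actually run int8 inference; the output file names are illustrative:

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Quantization-ready resnet50; quantize=True loads weights that are
# already statically quantized, with quant/dequant stubs built in.
model = torchvision.models.quantization.resnet50(pretrained=True, quantize=True)
model.eval()

# Script the quantized model, optimize it for mobile, and save.
script_model = torch.jit.script(model)
script_model_optimized = optimize_for_mobile(script_model)
torch.jit.save(script_model_optimized, "resnet50_quantized.pt")
# For the mobile lite interpreter one could instead use:
# script_model_optimized._save_for_lite_interpreter("resnet50_quantized.ptl")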

