
Pytorch Global Pruning is not reducing the size of the model

I am trying to prune my deep learning model via global pruning. The original, unpruned model is about 77.5 MB. However, after pruning, when I save the model, its size is the same as the original. Can anyone help me with this issue?

Below is the pruning code:

import torch
import torch.nn.utils.prune as prune

parameters_to_prune = (
    (model.encoder[0], 'weight'),
    (model.up_conv1[0], 'weight'),
    (model.up_conv2[0], 'weight'),
    (model.up_conv3[0], 'weight'),
)
print(parameters_to_prune)

prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.2,
)

print(
    "Sparsity in Encoder.weight: {:.2f}%".format(
        100. * float(torch.sum(model.encoder[0].weight == 0))
        / float(model.encoder[0].weight.nelement())
    )
)
print(
    "Sparsity in up_conv1.weight: {:.2f}%".format(
        100. * float(torch.sum(model.up_conv1[0].weight == 0))
        / float(model.up_conv1[0].weight.nelement())
    )
)
print(
    "Sparsity in up_conv2.weight: {:.2f}%".format(
        100. * float(torch.sum(model.up_conv2[0].weight == 0))
        / float(model.up_conv2[0].weight.nelement())
    )
)
print(
    "Sparsity in up_conv3.weight: {:.2f}%".format(
        100. * float(torch.sum(model.up_conv3[0].weight == 0))
        / float(model.up_conv3[0].weight.nelement())
    )
)

print(
    "Global sparsity: {:.2f}%".format(
        100. * float(
            torch.sum(model.encoder[0].weight == 0)
            + torch.sum(model.up_conv1[0].weight == 0)
            + torch.sum(model.up_conv2[0].weight == 0)
            + torch.sum(model.up_conv3[0].weight == 0)
        )
        / float(
            model.encoder[0].weight.nelement()
            + model.up_conv1[0].weight.nelement()
            + model.up_conv2[0].weight.nelement()
            + model.up_conv3[0].weight.nelement()
        )
    )
)

**Making the pruning permanent**

prune.remove(model.encoder[0], 'weight')
prune.remove(model.up_conv1[0], 'weight')
prune.remove(model.up_conv2[0], 'weight')
prune.remove(model.up_conv3[0], 'weight')

**Saving the model**

PATH = r"C:\PrunedNet.pt"  # raw string so the backslash is not treated as an escape
torch.save(model.state_dict(), PATH)

Pruning won't change the model size if applied like this.

If you have a tensor, say something like:

[1., 2., 3., 4., 5., 6., 7., 8.]

And you prune 50% of the values, so you get, for example, this:

[1., 2., 0., 4., 0., 6., 0., 0.]

You will still have 8 float values stored, so the tensor occupies exactly as much memory as before.
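You can verify this with a minimal check in plain PyTorch (the values are just the toy tensors above):

import torch

dense = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
pruned = torch.tensor([1., 2., 0., 4., 0., 6., 0., 0.])

# Both tensors store eight float32 values at 4 bytes each, i.e. 32 bytes:
print(dense.nelement() * dense.element_size())    # 32
print(pruned.nelement() * pruned.element_size())  # 32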

When does pruning reduce the model size?

  • When we save the weights in a sparse format, which only pays off at high sparsity (say, only 10% non-zero elements); see the first sketch below.
  • When we actually remove something (e.g., a whole kernel can be removed from a Conv2d if its weights are zero or negligible); see the second sketch below.
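A minimal sketch of the first point, independent of the question's model (the tensor and file names are made up for illustration):

import os
import torch

# A 1000x1000 float32 tensor with roughly 90% of its entries zeroed out.
dense = torch.randn(1000, 1000)
dense[torch.rand(1000, 1000) < 0.9] = 0.

torch.save(dense, "dense.pt")               # stores all 10^6 values
torch.save(dense.to_sparse(), "sparse.pt")  # stores only indices + non-zero values

print(os.path.getsize("dense.pt"))   # about 4 MB
print(os.path.getsize("sparse.pt"))  # smaller, but indices eat into the savings

Note that the COO format stores two int64 indices per non-zero value, so at 90% sparsity the sparse file is only about half the dense size; the sparser the tensor, the bigger the win.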

Otherwise it's not going to reduce the size. Check out related projects that let you do this without coding it all on your own, for example Torch-Pruning.
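And a rough manual sketch of the second point: physically slicing filters out of a Conv2d, so the saved tensors really do shrink. The layer shapes and keep_idx here are invented for illustration, and in a real network the layers that consume this output would have to be shrunk too, which is what libraries like Torch-Pruning automate:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)

# Suppose channels 0-7 are the ones we keep (in practice chosen by an
# importance criterion such as the L1 norm of each filter).
keep_idx = torch.arange(8)

smaller = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
with torch.no_grad():
    smaller.weight.copy_(conv.weight[keep_idx])
    smaller.bias.copy_(conv.bias[keep_idx])

# The new layer really has fewer parameters, so the saved file shrinks:
print(sum(p.nelement() for p in conv.parameters()))     # 448
print(sum(p.nelement() for p in smaller.parameters()))  # 224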
