Reduce model size by converting weights to float16 or int

I have a Keras model whose size I need to reduce. My understanding is that I can reduce the size by converting the weights stored in its layers to float16 or to int.
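To illustrate the size argument: half precision uses two bytes per value instead of four, so casting a weight array to float16 halves its storage. A quick numpy check (illustrative only, not from the original post):

import numpy as np

# A dummy weight matrix roughly the size of a dense layer's kernel
w = np.random.rand(1024, 1024).astype(np.float32)

print(w.nbytes)                     # 4194304 bytes at float32
print(w.astype(np.float16).nbytes)  # 2097152 bytes at float16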

I tried converting the weights to float16 and to int with the code below.

# Iterate over all the layers of the network
for layer_idx, layer in enumerate(model.layers):

    # If the layer has no weights, move to the next layer
    if not layer.get_weights():
        continue

    # Get existing weights
    old_weights = layer.get_weights()

    # List to store new weights
    new_weights = []

    # Iterate over weights
    for idx, weight in enumerate(old_weights):
        # Convert weight and append to new list
        new_weights.append(weight.astype(int))
        # print(weight.dtype)

    model.get_layer(name=layer.name).set_weights(new_weights)

For float16, the model size was not reduced. For int, I converted the weights using the code above, but I get the error below while loading the model:

 File "network_pruning.py", line 24, in <module>
    custom_objects={'angle_error': angle_error})
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/models.py", line 239, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/models.py", line 313, in model_from_config
    return layer_module.deserialize(config, custom_objects=custom_objects)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/utils/generic_utils.py", line 139, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/engine/topology.py", line 2490, in from_config
    process_layer(layer_data)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/engine/topology.py", line 2476, in process_layer
    custom_objects=custom_objects)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/utils/generic_utils.py", line 141, in deserialize_keras_object
    return cls.from_config(config['config'])
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/engine/topology.py", line 1253, in from_config
    return cls(**config)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/engine/topology.py", line 1348, in __init__
    name=self.name)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 488, in placeholder
    x = tf.placeholder(dtype, shape=shape, name=name)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1777, in placeholder
    return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4516, in placeholder
    dtype = _execute.make_type(dtype, "dtype")
  File "/home/aditya/miniconda3/envs/python36/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 126, in make_type
    (arg_name, repr(v)))
TypeError: Expected DataType for argument 'dtype' not 'int'.
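For reference, the float16 attempt presumably used the same loop with astype(np.float16); a minimal sketch of that variant follows (an assumption, since only the int version is shown above). One likely reason the saved model did not shrink is that set_weights copies the values back into the layer's existing variables, which are typically float32, so the float16 cast is effectively undone before saving:

import numpy as np

# Hypothetical float16 variant of the loop above (not from the original post)
for layer in model.layers:
    weights = layer.get_weights()
    if not weights:
        continue
    # Cast each weight array down to half precision
    new_weights = [w.astype(np.float16) for w in weights]
    # set_weights assigns these values into the layer's existing (float32)
    # variables, so the dtype stored in a saved model may not change
    layer.set_weights(new_weights)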

I am not even sure whether this is the right way to reduce model size. It would also be good if anyone can show me other ways to reduce model size and complexity.

Many thanks in advance!

I would not use integers as weights. I can't imagine them being granular enough to capture the small weight changes that solvers like gradient descent need in order to eventually find the global minimum.

Put another way, typical learning-rate values like 0.0001 or 0.001, once multiplied into the gradient and added to or subtracted from a weight, will not change the weight's value at all, because anything to the right of the decimal point gets dropped when the dtype is int.
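A tiny numpy sketch of that effect (illustrative only): a weight stored as an integer either loses the small update entirely to rounding, or jumps by a whole unit under truncation, so the fine-grained steps gradient descent relies on are gone.

import numpy as np

w = np.int32(5)            # a weight stored as an integer
lr, grad = 0.001, 0.3
step = lr * grad           # 0.0003 -- a typical small gradient-descent update

print(w - step)                    # 4.9997 in floating point
print(np.int32(w - step))          # 4 -- truncation turns a 0.0003 step into a 1.0 jump
print(np.int32(round(w - step)))   # 5 -- rounding makes the step vanish entirely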
