
How does PyTorch's nn.Module register submodules?

When I read the source code (Python) of torch.nn.Module, I found that the attribute self._modules is used in many functions, such as self.modules(), self.children(), etc. However, I did not find any function that updates it. So, when does self._modules get updated? And how does PyTorch's nn.Module register submodules?

class Module(object):
    def __init__(self):
        self._backend = thnn_backend
        self._parameters = OrderedDict()
        self._buffers = OrderedDict()
        self._backward_hooks = OrderedDict()
        self._forward_hooks = OrderedDict()
        self._forward_pre_hooks = OrderedDict()
        self._modules = OrderedDict()
        self.training = True

    def named_modules(self, memo=None, prefix=''):
        if memo is None:
            memo = set()
        if self not in memo:
            memo.add(self)
            yield prefix, self
            for name, module in self._modules.items():
                if module is None:
                    continue
                submodule_prefix = prefix + ('.' if prefix else '') + name
                for m in module.named_modules(memo, submodule_prefix):
                    yield m

Modules and parameters are usually registered by setting an attribute on an instance of nn.Module. In particular, this behavior is implemented by the __setattr__ method:

def __setattr__(self, name, value):
        def remove_from(*dicts):
            for d in dicts:
                if name in d:
                    del d[name]

        params = self.__dict__.get('_parameters')
        if isinstance(value, Parameter):
            if params is None:
                raise AttributeError(
                    "cannot assign parameters before Module.__init__() call")
            remove_from(self.__dict__, self._buffers, self._modules)
            self.register_parameter(name, value)
        elif params is not None and name in params:
            if value is not None:
                raise TypeError("cannot assign '{}' as parameter '{}' "
                                "(torch.nn.Parameter or None expected)"
                                .format(torch.typename(value), name))
            self.register_parameter(name, value)
        else:
            modules = self.__dict__.get('_modules')
            if isinstance(value, Module):
                if modules is None:
                    raise AttributeError(
                        "cannot assign module before Module.__init__() call")
                remove_from(self.__dict__, self._parameters, self._buffers)
                modules[name] = value
            elif modules is not None and name in modules:
                if value is not None:
                    raise TypeError("cannot assign '{}' as child module '{}' "
                                    "(torch.nn.Module or None expected)"
                                    .format(torch.typename(value), name))
                modules[name] = value
            else:
                buffers = self.__dict__.get('_buffers')
                if buffers is not None and name in buffers:
                    if value is not None and not isinstance(value, torch.Tensor):
                        raise TypeError("cannot assign '{}' as buffer '{}' "
                                        "(torch.Tensor or None expected)"
                                        .format(torch.typename(value), name))
                    buffers[name] = value
                else:
                    object.__setattr__(self, name, value)

See https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py to find this method.
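
To see the effect of this method, here is a minimal sketch (the Net class and the layer shapes below are made up for illustration): assigning a submodule as an attribute is enough to populate self._modules, which is what modules(), children() and named_modules() iterate over.

import torch.nn as nn

class Net(nn.Module):  # hypothetical example network
    def __init__(self):
        super().__init__()
        # each assignment goes through nn.Module.__setattr__,
        # which stores the layer in self._modules under the attribute name
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3)
        self.fc = nn.Linear(8, 2)

net = Net()
print(list(net._modules.keys()))                   # ['conv1', 'fc']
print([name for name, _ in net.named_modules()])   # ['', 'conv1', 'fc']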

Adding some details to Jiren Jin's answer:

  • Network layers (which inherit from nn.Module) are stored in Module._modules, which is initialized in __construct:

      def __init__(self):
          self.__construct()
          # initialize self.training separately from the rest of the internal
          # state, as it is managed differently by nn.Module and ScriptModule
          self.training = True

      def __construct(self):
          """
          Initializes internal Module state, shared by both nn.Module and ScriptModule.
          """
          # ...
          self._modules = OrderedDict()
  • self._modules is updated in __setattr__. __setattr__(obj, name, value) is called when obj.name = value is executed. For example, if you define self.conv1 = nn.Conv2d(128, 256, 3, 1, 1) when initializing a network that inherits from nn.Module, the following code from nn.Module.__setattr__ will be executed (see the sketch after this list for the effect):

      def __setattr__(self, name, value):
          def remove_from(*dicts):
              for d in dicts:
                  if name in d:
                      del d[name]

          params = self.__dict__.get('_parameters')
          if isinstance(value, Parameter):
              # ...
          elif params is not None and name in params:
              # ...
          else:
              modules = self.__dict__.get('_modules')  # equivalent to modules = self._modules
              if isinstance(value, Module):
                  if modules is None:
                      raise AttributeError(
                          "cannot assign module before Module.__init__() call")
                  remove_from(self.__dict__, self._parameters, self._buffers)
                  # register the given layer (nn.Conv2d) with its name (conv1)
                  # equivalent to self._modules['conv1'] = nn.Conv2d(128, 256, 3, 1, 1)
                  modules[name] = value
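
The same registration can also be done explicitly via the public add_module method, which stores the child under the given name in self._modules just like the attribute assignment above. A minimal sketch (the Net class and layer shapes are made up for illustration) showing that both styles end up in the same dict:

import torch.nn as nn

class Net(nn.Module):  # hypothetical example network
    def __init__(self):
        super().__init__()
        # attribute assignment: routed through __setattr__ as quoted above
        self.conv1 = nn.Conv2d(128, 256, 3, 1, 1)
        # explicit registration: adds a child module under the name 'conv2'
        self.add_module('conv2', nn.Conv2d(256, 256, 3, 1, 1))

net = Net()
print(list(net._modules.keys()))   # ['conv1', 'conv2']
print(net.conv2)                   # attribute access resolved via nn.Module.__getattr__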

Question from the comments:

Do you know how this interacts with the fact that PyTorch allows you to provide your own forward method?

If you run the forward pass of a network that inherits from nn.Module, nn.Module.__call__ will be invoked, which in turn calls self.forward. However, when implementing the network, forward has already been overridden, so the overridden forward is what ends up being executed.
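
A minimal sketch of that dispatch (the Net class and tensor shapes are made up for illustration): calling the module instance invokes nn.Module.__call__, which runs any registered hooks and then calls the forward you overrode.

import torch
import torch.nn as nn

class Net(nn.Module):  # hypothetical example network
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):  # overrides nn.Module.forward
        return self.fc(x)

net = Net()
x = torch.randn(3, 4)
y = net(x)        # goes through nn.Module.__call__, which dispatches to forward(x)
print(y.shape)    # torch.Size([3, 2])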
