
pybrain: how to print a network (nodes and weights)

Finally I managed to train a network from a file :) Now I want to print the nodes and the weights, especially the weights, because I want to train the network with pybrain and then implement the NN somewhere else that will use it.

I need a way to print the layers, the nodes, and the weights between nodes, so that I can easily replicate it. So far I see that I can access the layers using, for example, n['in'], and then I can do:

dir(n['in'])
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_backwardImplementation', '_forwardImplementation', '_generateName', '_getName', '_growBuffers', '_name', '_nameIds', '_resetBuffers', '_setName', 'activate', 'activateOnDataset', 'argdict', 'backActivate', 'backward', 'bufferlist', 'dim', 'forward', 'getName', 'indim', 'inputbuffer', 'inputerror', 'name', 'offset', 'outdim', 'outputbuffer', 'outputerror', 'paramdim', 'reset', 'sequential', 'setArgs', 'setName', 'shift', 'whichNeuron']

but I don't see how I can access the weights here. There is also the params attribute; for example, my network is 2-4-1 with bias, and it says:

n.params
array([-0.8167133 ,  1.00077451, -0.7591257 , -1.1150532 , -1.58789386,
        0.11625991,  0.98547457, -0.99397871, -1.8324281 , -2.42200963,
        1.90617387,  1.93741167, -2.88433965,  0.27449852, -1.52606976,
        2.39446258,  3.01359547])

It's hard to say what is what here, in particular which weight connects which nodes. That's all I need.

There are many ways to access the internals of a network, namely through its "modules" list or its "connections" dictionary. Parameters are stored within those connections or modules. For example, the following should print all of this information for an arbitrary network:

for mod in net.modules:
    print("Module:", mod.name)
    if mod.paramdim > 0:
        print("--parameters:", mod.params)
    for conn in net.connections[mod]:
        print("-connection to", conn.outmod.name)
        if conn.paramdim > 0:
            print("- parameters", conn.params)

# Recurrent connections live on the network itself, not on a module,
# so check for them once, outside the module loop.
if hasattr(net, "recurrentConns"):
    print("Recurrent connections")
    for conn in net.recurrentConns:
        print("-", conn.inmod.name, " to", conn.outmod.name)
        if conn.paramdim > 0:
            print("- parameters", conn.params)

If you want something more fine-grained (on the neuron level instead of the layer level), you will have to further decompose those parameter vectors -- or, alternatively, construct your network from single-neuron layers.
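
For the layer-to-layer FullConnections that buildNetwork creates, one way to do that decomposition is to reshape each connection's flat parameter vector into a weight matrix. This is a minimal sketch, assuming PyBrain's layout where the vector reshapes to (outdim, indim) -- the same reshape FullConnection applies in its own forward pass; weight_matrix is a hypothetical helper name:

import numpy as np

def weight_matrix(conn):
    # Rows index neurons in the target layer, columns index neurons in
    # the source layer, so weight_matrix(conn)[j, i] is the weight from
    # source neuron i to target neuron j.
    return np.reshape(conn.params, (conn.outdim, conn.indim))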

Try this, it worked for me:

def pesos_conexiones(n):
    # Walk every connection and print each individual weight together
    # with the (source neuron, target neuron) pair it links, which is
    # what conn.whichBuffers(index) returns.
    for mod in n.modules:
        for conn in n.connections[mod]:
            print(conn)
            for cc in range(len(conn.params)):
                print(conn.whichBuffers(cc), conn.params[cc])
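
To use it, call it on the trained network (net here is a hypothetical name for whatever network object you trained):

pesos_conexiones(net)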

The result should look like this (each tuple appears to be (source neuron index, target neuron index), followed by the corresponding weight):

<FullConnection 'co1': 'hidden1' -> 'out'>
(0, 0) -0.926912942354
(1, 0) -0.964135087592
<FullConnection 'ci1': 'in' -> 'hidden1'>
(0, 0) -1.22895643048
(1, 0) 2.97080368887
(2, 0) -0.0182867906276
(3, 0) 0.4292544603
(4, 0) 0.817440427069
(0, 1) 1.90099230604
(1, 1) 1.83477578625
(2, 1) -0.285569867513
(3, 1) 0.592193396226
(4, 1) 1.13092061631

Maybe this helps (PyBrain for Python 3.2)?

C:\tmp\pybrain_examples>\Python32\python.exe
Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from pybrain.tools.shortcuts import buildNetwork
>>> from pybrain.structure.modules.tanhlayer import TanhLayer
>>> from pybrain.structure.modules.softmax import SoftmaxLayer
>>>
>>> net = buildNetwork(4, 3, 1,bias=True,hiddenclass = TanhLayer, outclass =   SoftmaxLayer)
>>> print(net)
FeedForwardNetwork-8
Modules:
[<BiasUnit 'bias'>, <LinearLayer 'in'>, <TanhLayer 'hidden0'>, <SoftmaxLayer 'out'>]
Connections:
[<FullConnection 'FullConnection-4': 'hidden0' -> 'out'>, <FullConnection   'FullConnection-5': 'bias' -> 'out'>, <FullConnection
'FullConnection-6': 'bias' -> 'hidden0'>, <FullConnection 'FullConnection-7': 'in' -> 'hidden0'>]
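
From here you can get at any single connection's weights through the same connections dictionary shown above; a minimal sketch, assuming the network just built (the auto-generated names like 'FullConnection-7' vary from run to run):

conn = net.connections[net['in']][0]   # the 'in' -> 'hidden0' FullConnection
print(conn.params)                     # 4 * 3 = 12 weights for this layer pair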
