Making a custom activation function in TensorFlow 2.0
I am trying to create a custom tanh() activation function in TensorFlow to work with a particular output range. I want my network to output concentration multipliers, so I figured that if the output of tanh() were negative it should return a value between 0 and 1, and if it were positive it should output a value between 1 and 10.
Here is what I currently have:
def output_activation(x):
    # function to scale tanh activation to be 1-10 if x > 0, or 0-1 if x < 0
    return tf.cond(x >= 0, lambda: tf.math.tanh(x + 0.1) * 10, lambda: tf.math.tanh(x) + 1)
I believe this works for a single value, but I want to output a vector of values, for which Python throws a ValueError:

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Tensors are immutable, and from my understanding, converting to a NumPy array and back will slow down network training if I am on a GPU. What is the best way to get around this error while still keeping the benefits of hardware acceleration?
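As a minimal sketch of what goes wrong (not part of the original question): tf.cond expects a single scalar boolean predicate, while x >= 0 on a vector is a boolean tensor, hence the ambiguity error. An element-wise select such as tf.where picks a branch per element and stays on the device:

```python
import tensorflow as tf

# tf.cond needs a scalar boolean; the predicate x >= 0 here is a boolean
# vector. tf.where evaluates both branches and selects element-wise instead.
x = tf.constant([-2.0, 0.0, 2.0])

scaled = tf.where(x >= 0,
                  tf.math.tanh(x + 0.1) * 10,  # branch used where x >= 0
                  tf.math.tanh(x) + 1)         # branch used where x < 0
print(scaled.numpy())
```

Note that both branch tensors are computed in full; tf.where only chooses which result to keep at each position, which is cheap and GPU-friendly.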
I suggest you use tf.keras.backend.switch. Here is a dummy example:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
from tensorflow.keras import backend as K
def output_activation(x):
    return K.switch(x >= 0, tf.math.tanh(x + 0.1) * 10, tf.math.tanh(x) + 1)
X = np.random.uniform(0,1, (100,10))
y = np.random.uniform(0,1, 100)
inp = Input((10,))
x = Dense(8, activation=output_activation)(inp)
out = Dense(1)(x)
model = Model(inp, out)
model.compile('adam', 'mse')
model.fit(X,y, epochs=3)
Here is the running notebook: https://colab.research.google.com/drive/1T_kRNUphJt9xTjiOheTgoIGpGDZaaRAg?usp=sharing
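As a quick sanity check (not part of the original answer), you can call the activation directly on a sample tensor and confirm the intended output ranges; with a tensor condition, K.switch selects element-wise:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def output_activation(x):
    # element-wise: tanh(x + 0.1) * 10 where x >= 0, otherwise tanh(x) + 1
    return K.switch(x >= 0, tf.math.tanh(x + 0.1) * 10, tf.math.tanh(x) + 1)

x = tf.constant([[-3.0, -0.5, 0.5, 3.0]])
y = output_activation(x).numpy()
# negative inputs land in (0, 1); positive inputs land in (1, 10)
```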