Features as input of neural network
This is how I define my neural network:
import tensorflow as tf

class MyFun:
    def __init__(self, x, y, sizes, activations, scope):
        with tf.variable_scope(scope):
            last_out = tf.concat([x, y], axis=1)
            for l, size in enumerate(sizes):
                last_out = tf.layers.dense(last_out, size, activation=activations[l])
            self.vars = tf.trainable_variables(scope=scope)
            self.output = last_out
Before feeding the features to the network, I need to preprocess the inputs x and y (two placeholders). More specifically, I want to use quadratic features, i.e.

new_input = [1, x, y, x**2, y**2, cross(x,y)]

where cross(x,y) contains the products between all elements of [x, y], i.e.

cross(x,y) = [x_1*x_2, x_1*x_3, ..., x_1*y_1, ...]

How can I do this elegantly? Is there a TensorFlow equivalent of sklearn.preprocessing.PolynomialFeatures?
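For concreteness, here is what such a degree-2 feature vector looks like for tiny made-up inputs. This is a plain-Python/NumPy sketch (the example sizes and values are illustrative, not from the question); taking all pairs with repetition from [1, x, y] yields the constant, the linear terms, the squares, and the cross terms in one pass, which is essentially what sklearn.preprocessing.PolynomialFeatures(degree=2) produces:

```python
import numpy as np
from itertools import combinations_with_replacement

# hypothetical example inputs: x has 2 entries, y has 1
x = np.array([2.0, 3.0])
y = np.array([5.0])

z = np.concatenate(([1.0], x, y))  # [1, x1, x2, y1]

# all monomials of degree <= 2: pairs (a, b) with repetition
feats = [a * b for a, b in combinations_with_replacement(z, 2)]
print(feats)
# [1.0, 2.0, 3.0, 5.0, 4.0, 6.0, 10.0, 9.0, 15.0, 25.0]
#  = [1, x1, x2, y1, x1**2, x1*x2, x1*y1, x2**2, x2*y1, y1**2]
```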
Here is one option:
# Suppose your placeholders are one-dimensional vectors, with sizes 3 and 7:
x = tf.placeholder(tf.float32, shape=[3])
y = tf.placeholder(tf.float32, shape=[7])
# concat the constant 1.0 with x and y (z then has 1+3+7 = 11 entries):
z = tf.concat((tf.constant(1.0, shape=(1,)), x, y), axis=0)
# construct all products of pairs; the indices must run over all 3+7+1 entries of z:
new_input = [z[i]*z[j] for i in range(3+7+1) for j in range(i, 3+7+1)]
# convert the list of tensors to a tensor (optional):
new_input = tf.stack(new_input)
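To sanity-check the index bounds (z has 1 + 3 + 7 = 11 entries, so both loops must run up to 3+7+1), here is the same comprehension mirrored in plain NumPy; the pair count should be 11*12/2 = 66, and the first product z[0]*z[0] is the constant term:

```python
import numpy as np

x = np.random.randn(3)
y = np.random.randn(7)
z = np.concatenate(([1.0], x, y))  # 1 + 3 + 7 = 11 entries

# same double comprehension as the tensor version above
new_input = np.array([z[i] * z[j] for i in range(3+7+1) for j in range(i, 3+7+1)])
print(new_input.shape)  # (66,) -- 11*12/2 pairs with j >= i
assert np.isclose(new_input[0], 1.0)  # the constant term 1*1
```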
Edit 1
Extending this to the case where x and y have a batch dimension:
x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.placeholder(tf.float32, shape=[None, 7])
# use 1.0+0*x[:,:1] instead of tf.constant(1.0) to get a column of ones
# with the same (unknown) batch size as x:
z = tf.concat((1.0 + 0*x[:, :1], x, y), axis=1)
new_input = [z[:, i]*z[:, j] for i in range(3+7+1) for j in range(i, 3+7+1)]
new_input = tf.stack(new_input, 1)
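The batched construction can also be vectorized without a Python loop by gathering the upper-triangular index pairs once and multiplying two slices. A NumPy analogue (the batch size 4 and the random inputs are illustrative; np.triu_indices gives all (i, j) pairs with j >= i):

```python
import numpy as np

batch = 4
x = np.random.randn(batch, 3)
y = np.random.randn(batch, 7)

z = np.concatenate([np.ones((batch, 1)), x, y], axis=1)  # shape (batch, 11)
n = z.shape[1]
i, j = np.triu_indices(n)          # all index pairs with j >= i
new_input = z[:, i] * z[:, j]      # shape (batch, 66): all pairwise products
print(new_input.shape)             # (4, 66)
```

The same idea carries over to TensorFlow with tf.gather along axis 1 instead of the fancy indexing, which avoids building a long list of per-pair tensors.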