
Tensorflow Keras: Is there a difference defining the activation function separately for Conv2D?

In some examples, I see the Conv2D layer defined like this:

```python
import tensorflow as tf

# ....

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=3, activation="relu"))

# ....
```

...and in others, I see the model defined like this:

```python
import tensorflow as tf

# ....

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=3))
model.add(tf.keras.layers.ReLU())

# ....
```

Is there any difference between defining the activation function/layer separately from the Conv2D layer?

It's just a programming preference. Doing the activation as its own layer can be more illustrative, especially if you're doing something less common than ReLU. The following check shows the two forms produce identical outputs:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

x = tf.random.uniform([1, 100], minval=-1, maxval=1)
# Identity kernel, so each Dense layer passes the input straight through.
init = tf.constant_initializer(np.eye(x.shape[1], x.shape[1]))
dense_baked_relu = keras.layers.Dense(x.shape[1], activation='relu', use_bias=False, kernel_initializer=init)
dense_linear = keras.layers.Dense(x.shape[1], activation='linear', use_bias=False, kernel_initializer=init)
relu_layer = keras.layers.ReLU()
y0 = dense_baked_relu(x)
y1 = relu_layer(dense_linear(x))
print(y1 - y0)  # all zeros: the two forms are equivalent
```
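One place the separate-layer form clearly pays off is when the activation takes parameters, or when something such as batch normalization has to sit between the convolution and the nonlinearity. A minimal sketch of that pattern (my own illustration using TF2-style argument names, not part of the answer above):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=64, kernel_size=3),
    tf.keras.layers.BatchNormalization(),  # normalize the pre-activations
    tf.keras.layers.LeakyReLU(alpha=0.2),  # a custom slope cannot be passed via activation="..."
])
```

With the inline string form, the nonlinearity is applied inside Conv2D, so there would be no way to place the normalization before it.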

Edit: in case you are skeptical that there might be some subtle difference between Dense and Conv2D, here is the same concept applied to Conv2D.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

x = tf.random.uniform([1, 100, 100, 1], minval=-1, maxval=1)
# A fixed 3x3 edge-detector-style kernel for the single input channel.
sobel = np.zeros((3, 3, 1))
sobel[:, :, 0] = np.array([[1, 0, 1], [-1, 0, -1], [1, 0, 1]])
init = tf.constant_initializer(sobel)
conv_baked_relu = keras.layers.Conv2D(filters=1, activation='relu', kernel_size=3, use_bias=False, kernel_initializer=init)
conv_linear = keras.layers.Conv2D(filters=1, activation='linear', kernel_size=3, use_bias=False, kernel_initializer=init)
relu_layer = keras.layers.ReLU()
y0 = conv_baked_relu(x)
y1 = relu_layer(conv_linear(x))
print(y1 - y0)  # again all zeros
```
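To tie this back to the two Sequential snippets in the question: with identical weights, the inline and separate-layer models produce identical predictions. A small self-contained check (my own sketch, copying weights with get_weights/set_weights, which works here because ReLU adds no weights of its own):

```python
import numpy as np
import tensorflow as tf

x = tf.random.uniform([1, 8, 8, 1])

inline = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=4, kernel_size=3, activation="relu"),
])
separate = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=4, kernel_size=3),
    tf.keras.layers.ReLU(),
])

inline.build(x.shape)
separate.build(x.shape)
separate.set_weights(inline.get_weights())  # same kernel and bias in both models

print(np.abs(inline(x).numpy() - separate(x).numpy()).max())  # 0.0
```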
