2 Answers

Contributed 1,799 experience points, received 9+ upvotes
If you want a custom preprocessing layer, you don't actually need to use PreprocessingLayer; you can simply subclass Layer.
Take the simplest preprocessing layer, Rescaling, as an example. It lives under the tf.keras.layers.experimental.preprocessing.Rescaling namespace, but if you check the actual implementation, it is just a subclass of Layer (source code linked here), decorated with @keras_export('keras.layers.experimental.preprocessing.Rescaling'):
# Imports in the original TF source (abridged):
# from tensorflow.python.ops import math_ops
@keras_export('keras.layers.experimental.preprocessing.Rescaling')
class Rescaling(Layer):
  """Multiply inputs by `scale` and adds `offset`.

  For instance:
  1. To rescale an input in the `[0, 255]` range
  to be in the `[0, 1]` range, you would pass `scale=1./255`.
  2. To rescale an input in the `[0, 255]` range to be in the `[-1, 1]` range,
  you would pass `scale=1./127.5, offset=-1`.

  The rescaling is applied both during training and inference.

  Input shape:
    Arbitrary.

  Output shape:
    Same as input.

  Arguments:
    scale: Float, the scale to apply to the inputs.
    offset: Float, the offset to apply to the inputs.
    name: A string, the name of the layer.
  """

  def __init__(self, scale, offset=0., name=None, **kwargs):
    self.scale = scale
    self.offset = offset
    super(Rescaling, self).__init__(name=name, **kwargs)

  def call(self, inputs):
    dtype = self._compute_dtype
    scale = math_ops.cast(self.scale, dtype)
    offset = math_ops.cast(self.offset, dtype)
    return math_ops.cast(inputs, dtype) * scale + offset

  def compute_output_shape(self, input_shape):
    return input_shape

  def get_config(self):
    config = {
        'scale': self.scale,
        'offset': self.offset,
    }
    base_config = super(Rescaling, self).get_config()
    return dict(list(base_config.items()) + list(config.items()))
So the Rescaling preprocessing layer turns out to be just another ordinary layer.
The key part is the call(self, inputs) method: you can implement arbitrarily complex logic there to preprocess your inputs, then return the result.
Simpler documentation on writing custom layers can be found here.
In short, you can do preprocessing inside a layer: use Lambda for simple operations, or subclass Layer to achieve your goal.
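To make this concrete, here is a minimal sketch of a custom preprocessing layer built by subclassing Layer, in the same spirit as Rescaling. The layer name `CustomStandardize` and its `mean`/`std` parameters are illustrative, not from the answer above:

```python
import tensorflow as tf

class CustomStandardize(tf.keras.layers.Layer):
    """Hypothetical preprocessing layer: (x - mean) / std."""

    def __init__(self, mean, std, **kwargs):
        super().__init__(**kwargs)
        self.mean = mean
        self.std = std

    def call(self, inputs):
        # Any preprocessing logic goes here, mirroring Rescaling's call().
        x = tf.cast(inputs, self.compute_dtype)
        return (x - self.mean) / self.std

    def get_config(self):
        # Needed so the layer survives model saving/loading.
        config = super().get_config()
        config.update({'mean': self.mean, 'std': self.std})
        return config

layer = CustomStandardize(mean=127.5, std=127.5)
out = layer(tf.constant([[0.0, 255.0]]))
# out is approximately [[-1.0, 1.0]]
```

Such a layer can then be placed at the front of a Sequential or functional model just like any other layer.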

Contributed 1,818 experience points, received 8+ upvotes
I think the best and cleanest solution is a simple Lambda layer that wraps your preprocessing function.
Here is a minimal working example with dummy data:
import numpy as np
from tensorflow.keras.layers import Input, Lambda, Conv2D, Flatten, Dense
from tensorflow.keras.models import Model

# Dummy data: 200 RGB images in [0, 255] with labels from 3 classes.
X = np.random.randint(0, 256, (200, 32, 32, 3))
y = np.random.randint(0, 3, 200)

inp = Input((32, 32, 3))
x = Lambda(lambda t: t / 255)(inp)   # rescale inputs inside the model
x = Conv2D(8, 3, activation='relu')(x)
x = Flatten()(x)
out = Dense(3, activation='softmax')(x)

m = Model(inp, out)
m.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
          metrics=['accuracy'])
history = m.fit(X, y, epochs=10)
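Because the Lambda layer is part of the model graph, the scaling is applied at inference time as well, so raw [0, 255] inputs can be fed directly to predict(). A standalone sketch of just the rescaling part (the model name `scaler` is illustrative):

```python
import numpy as np
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

# A tiny model containing only the Lambda rescaling step.
inp = Input((2,))
out = Lambda(lambda t: t / 255)(inp)
scaler = Model(inp, out)

# Raw [0, 255] values go in; scaled [0, 1] values come out.
result = scaler.predict(np.array([[0.0, 255.0]]))
# result is approximately [[0.0, 1.0]]
```

One caveat worth knowing: Lambda layers serialize the wrapped Python function, which can make saved models less portable across environments than a proper Layer subclass with get_config().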