

Adding a multiplication layer to an autoencoder in Keras


楊__羊羊 2021-08-05 18:14:44
I want to add a multiplication layer on top of an LSTM autoencoder. The multiplication layer should multiply the tensor by a constant value. I wrote the following code, which works without the multiplication layer. Does anyone know how to adjust it so that it works with the layer?

import keras
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.optimizers import SGD, RMSprop, Adam
from keras import objectives
from keras.engine.topology import Layer
import numpy as np

class LayerKMultiply(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        self.k = None
        super(LayerKMultiply, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.k = self.add_weight(
            name='k',
            shape=(),
            initializer='ones',
            dtype='float32',
            trainable=True,
        )
        super(LayerKMultiply, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        #return K.tf.multiply(self.k, x)
        return self.k * x

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

timesteps, input_dim, latent_dim = 10, 3, 32

inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim, return_sequences=False, activation='linear')(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True, activation='linear')(decoded)
decoded = TimeDistributed(Dense(input_dim, activation='linear'))(decoded)
#decoded = LayerKMultiply(k = 20)(decoded)

sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

X = np.array([[[1,2,3,4,5,6,7,8,9,10],[1,2,3,4,5,6,7,8,9,10],[1,2,3,4,5,6,7,8,9,10]]])
X = X.reshape(1,10,3)
p = autoencoder.predict(x=X, batch_size=1)
print(p)

2 Answers

侃侃無極

You are mixing positional arguments with keyword arguments. When you define a function like def __init__(self, output_dim, **kwargs), output_dim is a positional argument. You need to:

  • either pass 20 positionally yourself: LayerKMultiply(20)(decoded)

  • or change the signature to def __init__(self, k=10, **kwargs)

  • or remove output_dim from the definition and use self.output_dim = kwargs['k']

More information here.
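The three options above can be sketched in plain Python, without Keras. The class names below are hypothetical stand-ins for the question's LayerKMultiply, one per option, just to show how the call sites change:

```python
class LayerKMultiplyPositional:
    # Option 1: keep output_dim positional and pass the value positionally.
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim

class LayerKMultiplyKeyword:
    # Option 2: make the parameter a keyword argument with a default.
    def __init__(self, k=10, **kwargs):
        self.k = k

class LayerKMultiplyKwargs:
    # Option 3: accept only **kwargs and pull the value out by name.
    def __init__(self, **kwargs):
        self.output_dim = kwargs['k']

a = LayerKMultiplyPositional(20)   # 20 fills the positional output_dim
b = LayerKMultiplyKeyword(k=20)    # k is now a named parameter
c = LayerKMultiplyKwargs(k=20)     # k lands inside **kwargs
print(a.output_dim, b.k, c.output_dim)  # 20 20 20
# LayerKMultiplyPositional(k=20) raises TypeError: output_dim is missing,
# which is exactly the mismatch in the question's LayerKMultiply(k=20).
```

The original error comes from calling LayerKMultiply(k = 20) while the constructor declares output_dim positionally, so k=20 falls into **kwargs and output_dim is never supplied.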


Answered 2021-08-05
墨色風雨

I believe the solution is the following:


import keras
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.optimizers import SGD, RMSprop, Adam
from keras import objectives
from keras.engine.topology import Layer
import numpy as np

class LayerKMultiply(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        self.k = None
        super(LayerKMultiply, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.k = self.add_weight(
            name='k',
            shape=(),
            initializer='ones',
            dtype='float32',
            trainable=True,
        )
        super(LayerKMultiply, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        return self.k * x

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1], input_shape[2])
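The key change above is compute_output_shape: multiplying by a scalar preserves the full (batch, timesteps, features) shape, so all three dimensions must be returned. A minimal NumPy sketch of what call() computes (the 'ones' initializer means k starts at 1.0):

```python
import numpy as np

k = np.float32(1.0)  # scalar weight, initialized to 1 like the 'ones' initializer
x = np.arange(30, dtype=np.float32).reshape(1, 10, 3)  # (batch, timesteps, features)

# Same broadcast as `self.k * x` in call(): every element scaled by k.
y = k * x
print(y.shape)  # (1, 10, 3) -- shape unchanged, hence the 3-tuple output shape
```

With k = 1.0 the output equals the input; during training the scalar is updated by backpropagation like any other weight.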


Answered 2021-08-05