There are some parameters in the config that I don't fully understand, in particular what happens when I change max_len, hidden_size, or embedding_size:

    config = {
        "max_len": 64,
        "hidden_size": 64,
        "vocab_size": vocab_size,
        "embedding_size": 128,
        "n_class": 15,
        "learning_rate": 1e-3,
        "batch_size": 32,
        "train_epoch": 20
    }

When I change them I get the error:

    ValueError: Cannot feed value of shape (32, 32) for Tensor 'Placeholder:0', which has shape '(?, 64)'

Below is the TensorFlow graph I am having trouble understanding. Is there a way to work out how max_len, hidden_size, and embedding_size relate to each other, so that I can set them without running into the error above?

    embeddings_var = tf.Variable(
        tf.random_uniform([self.vocab_size, self.embedding_size], -1.0, 1.0),
        trainable=True)
    batch_embedded = tf.nn.embedding_lookup(embeddings_var, self.x)

    # multi-head attention
    ma = multihead_attention(queries=batch_embedded, keys=batch_embedded)

    # FFN(x) = LN(x + point-wisely NN(x))
    outputs = feedforward(ma, [self.hidden_size, self.embedding_size])

    # flatten (batch, max_len, embedding_size) before the classifier layer
    outputs = tf.reshape(outputs, [-1, self.max_len * self.embedding_size])
    logits = tf.layers.dense(outputs, units=self.n_class)

    self.loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=self.label))
    self.prediction = tf.argmax(tf.nn.softmax(logits), 1)

    # optimization
    loss_to_minimize = self.loss
    tvars = tf.trainable_variables()
    gradients = tf.gradients(loss_to_minimize, tvars,
                             aggregation_method=tf.AggregationMethod.EXPERIMENTAL_TREE)
    grads, global_norm = tf.clip_by_global_norm(gradients, 1.0)

    self.global_step = tf.Variable(0, name="global_step", trainable=False)
    self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
    self.train_op = self.optimizer.apply_gradients(
        zip(grads, tvars), global_step=self.global_step, name='train_step')

    print("graph built successfully!")
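My guess (which may be wrong) is that the placeholder behind self.x is declared with a fixed second dimension of max_len, so every batch that gets fed has to be padded to exactly that many tokens. Here is a minimal sketch of that assumption; the placeholder definition and the dummy batches below are my own illustration, not code from the model:

    import numpy as np
    import tensorflow as tf

    max_len = 64  # would have to match config["max_len"]

    # Hypothetical placeholder (the real definition is not shown above):
    # the second dimension is fixed to max_len, so a fed batch must have
    # exactly max_len columns.
    x = tf.placeholder(tf.int32, [None, max_len], name="x")

    good_batch = np.zeros((32, max_len), dtype=np.int32)  # shape (32, 64)
    bad_batch = np.zeros((32, 32), dtype=np.int32)        # shape (32, 32)

    with tf.Session() as sess:
        sess.run(tf.shape(x), feed_dict={x: good_batch})   # works
        # sess.run(tf.shape(x), feed_dict={x: bad_batch})  # raises the
        # "Cannot feed value of shape (32, 32) for Tensor 'Placeholder:0',
        # which has shape '(?, 64)'" ValueError

If that reading is right, the input data would need to be padded/truncated to the same max_len as the config, but I would still like to understand how hidden_size and embedding_size fit in.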
How do I set the config parameters in an attention-based model?
慕尼黑8549860
2022-01-11 17:20:23