I am trying to implement gradient descent by computing the values in a function, and I get an error in my code.

def gradient_descent(X, y, theta, alpha, num_iters):
    """
    Args
    ----
    X (numpy mxn array) - The example inputs, first column is expected to be all 1's.
    y (numpy m array) - A vector of the correct outputs of length m
    theta (numpy nx1 array) - An array of the set of theta parameters to evaluate
    alpha (float) - The learning rate to use for the iterative gradient descent
    num_iters (int) - The number of gradient descent iterations to perform

    Returns
    -------
    theta (numpy nx1 array) - The final theta parameters discovered after our gradient descent.
    J_history (numpy num_itersx1 array) - A history of the calculated cost for each iteration of our descent.
    """
    m = len(y)
    cost_history = np.zeros(num_iters)
    theta_history = np.zeros((num_iters, 2))
    for i in range(num_iters):
        prediction = np.reshape(np.dot(np.transpose(theta), X), 97)
        theta = theta - (1/m)*alpha*(X.T.dot((prediction - y)))
        theta_history[i, :] = theta.T
        J_history[i] = cal_cost(theta, X, y)
    return theta, J_history

Here are the arguments and variables I pass to the function:

theta = np.zeros((2, 1))
iterations = 1500
alpha = 0.01

theta, J = gradient_descent(X, y, theta, alpha, iterations)

The error message is:

ValueError: shapes (97,2) and (97,) not aligned: 2 (dim 1) != 97 (dim 0)
1 Answer

大話西游666
I'm not sure exactly where your ValueError is raised, but an ndarray with shape (97,) needs np.expand_dims run on it, like so:

np.expand_dims(vector, axis=-1)

This gives the vector shape (97, 1), so it should then be aligned / able to be broadcast.
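A minimal sketch of that suggestion, assuming (as in the question) that X has shape (97, 2) and y has shape (97,). The toy data and the residual computation below are only illustrative, not the asker's actual code:

import numpy as np

# Toy data with the assumed shapes from the question: X is (97, 2), y is (97,)
X = np.hstack([np.ones((97, 1)), np.random.rand(97, 1)])
y = np.random.rand(97)

# Turn the flat vector into an explicit column vector
y_col = np.expand_dims(y, axis=-1)   # (97,) -> (97, 1)
print(y.shape, y_col.shape)          # (97,) (97, 1)

# With a (2, 1) theta, X.dot(theta) is (97, 1); subtracting a (97, 1) column
# keeps the residual (97, 1). Subtracting the flat (97,) y instead would
# broadcast to (97, 97), which is where shape mismatches like this come from.
theta = np.zeros((2, 1))
residual = X.dot(theta) - y_col      # shape (97, 1)
print(residual.shape)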