3 Answers

Contributed 1993 experience points · earned 6+ upvotes
import numpy as np  # needed for np.mean at the end

def train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples):
    """Run one pass over data_loader and return the mean training loss."""
    losses = []
    for d in data_loader:
        # Move the batch to the training device
        input_ids = d['input_ids'].to(device)
        targets = d['targets'].to(device)
        outputs = model(input_ids=input_ids, labels=targets)
        loss = loss_fn(outputs, targets)
        losses.append(loss.item())
        # Backpropagate, update weights and learning rate, then clear gradients
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
    return np.mean(losses)
Format it like the above. Your code was not inside the indented block of your train_epoch() method.
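
For context, here is a minimal, self-contained sketch of how the corrected function could be called. The ToyDataset and ToyModel below are illustrative assumptions made up for this example, not part of the question; only train_epoch and its parameter names come from the code above.

import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Random data shaped like the batches the question's loop expects."""
    def __init__(self, n=32):
        self.x = torch.randn(n, 8)
        self.y = torch.randint(0, 2, (n,))
    def __len__(self):
        return len(self.x)
    def __getitem__(self, i):
        return {'input_ids': self.x[i], 'targets': self.y[i]}

class ToyModel(nn.Module):
    """Accepts the labels keyword (ignored here) and returns logits."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)
    def forward(self, input_ids, labels=None):
        return self.fc(input_ids)

device = torch.device('cpu')
model = ToyModel().to(device)
loader = DataLoader(ToyDataset(), batch_size=4)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
# Constant-factor scheduler, just to satisfy train_epoch's signature
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: 1.0)

print(train_epoch(model, loader, loss_fn, optimizer, device, scheduler, n_examples=32))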

Contributed 1875 experience points · earned 5+ upvotes
I fixed it: the for loop must be indented so that it sits inside the function body; otherwise Python does not treat it as part of the function. Also, the losses list must be initialized once before the for loop, at the same indentation level as the for statement, not inside it; if it were re-created inside the loop, it would be reset on every iteration and np.mean(losses) would only reflect the last batch. Try it and let me know. If it works, upvote and accept :-)
import numpy as np

def train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples):
    losses = []  # initialize once, before the loop, so every batch's loss is kept
    for d in data_loader:
        input_ids = d['input_ids'].to(device)
        targets = d['targets'].to(device)
        outputs = model(input_ids=input_ids, labels=targets)
        loss = loss_fn(outputs, targets)
        losses.append(loss.item())
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
    return np.mean(losses)
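
A quick way to convince yourself why the placement of losses matters, independent of any training code:

# Re-initializing the list inside the loop discards earlier values:
for x in [0.9, 0.7, 0.5]:
    losses = []          # reset on every iteration
    losses.append(x)
print(losses)            # [0.5] -- only the last value survives

# Initializing once before the loop keeps all of them:
losses = []
for x in [0.9, 0.7, 0.5]:
    losses.append(x)
print(losses)            # [0.9, 0.7, 0.5]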

Contributed 1780 experience points · earned 5+ upvotes
這是因?yàn)榈谝恍?。函?shù)體必須縮進(jìn)。
import numpy as np

def train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples):
    """Run one pass over data_loader and return the mean training loss."""
    losses = []
    for d in data_loader:
        input_ids = d['input_ids'].to(device)
        targets = d['targets'].to(device)
        outputs = model(input_ids=input_ids, labels=targets)
        loss = loss_fn(outputs, targets)
        losses.append(loss.item())
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
    return np.mean(losses)
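
As a sanity check, Python rejects a def whose body is not indented before anything runs; the snippet below reproduces the error (the exact message wording varies by Python version):

src = "def f():\nreturn 1"      # body at the same level as the def header
try:
    compile(src, '<example>', 'exec')
except IndentationError as e:
    print(e)  # e.g. "expected an indented block after function definition on line 1"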