dgld.utils.early_stopping

class dgld.utils.early_stopping.EarlyStopping(early_stopping_rounds=0, patience=7, verbose=False, delta=0, check_finite=True)[source]

Bases: object

Stops training early if the loss does not improve after a given patience.

Parameters
  • early_stopping_rounds (int, optional) – Number of rounds to wait before early stopping is considered, by default 0

  • patience (int, optional) – How many rounds to wait after the last loss improvement before stopping, by default 7

  • verbose (bool, optional) – If True, prints a message for each loss improvement, by default False

  • delta (int, optional) – Minimum change in the monitored loss to qualify as an improvement, by default 0

  • check_finite (bool, optional) – When True, stops training if the monitored loss becomes NaN or infinite, by default True
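The parameters above combine as follows: the monitor is inactive for the first early_stopping_rounds rounds, a loss must drop by more than delta below the best value seen so far to count as an improvement, and patience consecutive non-improving rounds trigger the stop. A minimal, framework-free sketch of these semantics (hypothetical code, not dgld's actual implementation):

```python
import math

class EarlyStopper:
    """Hypothetical sketch of early-stopping semantics, not dgld's implementation."""

    def __init__(self, early_stopping_rounds=0, patience=7, delta=0, check_finite=True):
        self.early_stopping_rounds = early_stopping_rounds  # rounds to skip before counting
        self.patience = patience          # non-improving rounds tolerated before stopping
        self.delta = delta                # minimum decrease that counts as an improvement
        self.check_finite = check_finite  # stop immediately on NaN/inf loss
        self.best_loss = math.inf
        self.counter = 0
        self.round = 0
        self.stop = False

    def step(self, loss):
        self.round += 1
        if self.check_finite and not math.isfinite(loss):
            self.stop = True          # non-finite loss: training cannot recover
            return
        if loss < self.best_loss - self.delta:
            self.best_loss = loss     # improvement: reset the patience counter
            self.counter = 0
        elif self.round > self.early_stopping_rounds:
            self.counter += 1         # one more round without improvement
            if self.counter >= self.patience:
                self.stop = True

# Demo: the loss plateaus after round 1, so patience runs out in round 4.
stopper = EarlyStopper(patience=3)
for epoch, loss in enumerate([1.0, 0.9, 0.9, 0.9, 0.9]):
    stopper.step(loss)
    if stopper.stop:
        print(f"Early stopping in round {epoch}")  # → Early stopping in round 4
        break
```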

Examples

>>> early_stop = EarlyStopping()
>>> for epoch in range(num_epoch):
...     res = model(data)
...     loss = torch.mean(torch.pow(res - label, 2))
...     opt.zero_grad()
...     loss.backward()
...     opt.step()
...     early_stop(loss, model)
...     if early_stop.isEarlyStopping():
...         print(f"Early stopping in round {epoch}")
...         break
property best_paramenters

Returns
  The model.state_dict() of the minimal loss

Return type
  OrderedDict

property early_stop

Returns
  Whether training should stop early

Return type
  bool

isEarlyStopping()[source]

Returns whether the early stopping condition has been met.

save_best_parameters(model)[source]

Saves the model.state_dict() corresponding to the minimal loss.

Parameters

model (torch.nn.Module) – The model whose state to save
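Because the model keeps training after a snapshot is taken, the saved state must be an independent copy; otherwise later parameter updates would silently overwrite the "best" weights. A small sketch of that idea, using a plain dict to stand in for model.state_dict() (hypothetical code, not dgld's implementation):

```python
import copy

def save_best_parameters(state_dict):
    # Deep-copy so that later in-place parameter updates
    # do not mutate the stored snapshot.
    return copy.deepcopy(state_dict)

# A plain dict stands in for model.state_dict() here.
live_state = {"layer.weight": [0.5, -0.2]}
best = save_best_parameters(live_state)

live_state["layer.weight"][0] = 999.0   # simulated training update
print(best["layer.weight"][0])          # → 0.5 (snapshot is unaffected)
```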