- PyroTrainingPlan.backward(*args, **kwargs)
Override backward with your own implementation if you need to.
Parameters:
- loss: the loss, already scaled for accumulated gradients
- optimizer: the current optimizer being used
- optimizer_idx: index of the current optimizer being used
Called to perform the backward step. Feel free to override as needed. The loss passed in has already been scaled for accumulated gradients if requested.
Example:

    def backward(self, loss, optimizer, optimizer_idx):
        loss.backward()
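For a concrete case, backward can be overridden in a subclass. Below is a minimal sketch, assuming PyroTrainingPlan is importable from scvi.train (as in scvi-tools); the CustomTrainingPlan name and the retain_graph=True argument are only illustrative assumptions, not part of the documented API.

    # Minimal sketch of a custom backward; the import path, class name, and
    # retain_graph argument are illustrative assumptions.
    from scvi.train import PyroTrainingPlan

    class CustomTrainingPlan(PyroTrainingPlan):
        def backward(self, loss, optimizer, optimizer_idx):
            # Keep the autograd graph alive, e.g. when a second backward
            # pass over the same graph is needed.
            loss.backward(retain_graph=True)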