I mean, there's not even any clear connection between `opt` and `model/loss` in that code snippet
Are you trolling? How is the GradientTape remotely similar to `loss.backward()`? And what you're showing is an end-to-end black box, not a low-level training loop. Where are the gradients? What does `backward()` do? Might as well show `model.train_step(input_data, target_data)`.
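For what it's worth, here is roughly what the explicit PyTorch loop looks like when the wiring is spelled out (a minimal sketch with a made-up linear model, MSE loss, and SGD; the names and shapes are just for illustration). The point is that `opt` is constructed from `model.parameters()`, and `loss.backward()` writes gradients into those same parameters' `.grad` fields, which `opt.step()` then reads:

```python
import torch

# toy model and data, purely hypothetical, just to make the wiring visible
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # opt holds references to model's parameters
loss_fn = torch.nn.MSELoss()

input_data = torch.randn(32, 10)
target_data = torch.randn(32, 1)

for step in range(100):
    opt.zero_grad()                   # clear .grad on every parameter opt knows about
    pred = model(input_data)          # forward pass records the autograd graph
    loss = loss_fn(pred, target_data)
    loss.backward()                   # backprop: fills p.grad for each parameter in the graph
    opt.step()                        # opt reads those same p.grad tensors and updates p in place
```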
-
This is my biggest gripe with PyTorch. The actual gradient-updating process is way too abstract. GradientTape is much clearer.
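For comparison, a sketch of the GradientTape version of the same toy loop (same made-up shapes and names); here the gradients come back as an explicit list that you pair with the variables yourself:

```python
import tensorflow as tf

# toy model and data, hypothetical, mirroring the PyTorch sketch above
model = tf.keras.layers.Dense(1)
opt = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

input_data = tf.random.normal((32, 10))
target_data = tf.random.normal((32, 1))

for step in range(100):
    with tf.GradientTape() as tape:
        pred = model(input_data)              # ops inside the tape are recorded
        loss = loss_fn(target_data, pred)
    grads = tape.gradient(loss, model.trainable_variables)      # gradients as an explicit list
    opt.apply_gradients(zip(grads, model.trainable_variables))  # pairing with variables is spelled out
```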