Are you trolling? How is the GradientTape remotely similar to `loss.backward()`? And what you're showing is an end-to-end black box, not a low-level training loop. Where are the gradients? What does `backward()` do? Might as well show `model.train_step(input_data, target_data)`.
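For reference, here's roughly what a low-level PyTorch loop looks like with the gradients spelled out (just a sketch, the model and data are made up): `loss.backward()` fills in `p.grad` on every parameter that went into the loss, and you can apply the update by hand instead of hiding it behind an optimizer.

```python
import torch

# Hypothetical tiny model and data, just to make the loop concrete.
model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)

for step in range(100):
    pred = model(x)
    loss = torch.nn.functional.mse_loss(pred, y)

    model.zero_grad()      # clear gradients left over from the previous step
    loss.backward()        # autograd populates p.grad for every parameter in the graph

    with torch.no_grad():  # manual SGD update: the gradients live on the parameters
        for p in model.parameters():
            p -= 0.01 * p.grad
```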
-
-
This is my biggest gripe with PyTorch. The actual gradient-updating process is way too abstract. GradientTape is much clearer.
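For comparison, a rough GradientTape version of the same kind of loop (again only a sketch, the model and data are made up): the gradients come back as explicit tensors from `tape.gradient(...)`, and you apply them to the variables yourself.

```python
import tensorflow as tf

# Hypothetical tiny model and data, mirroring the PyTorch sketch above.
model = tf.keras.layers.Dense(1)
x, y = tf.random.normal((32, 10)), tf.random.normal((32, 1))

for step in range(100):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((model(x) - y) ** 2)

    # Gradients are explicit values here, one per trainable variable.
    grads = tape.gradient(loss, model.trainable_variables)
    for var, g in zip(model.trainable_variables, grads):
        var.assign_sub(0.01 * g)  # manual SGD update
```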