Q. In GANs, why don't discriminators produce per-pixel losses for the generator's backprop? Surely a bad image isn't equally bad in every pixel, yet D produces a single global loss anyway and throws away more precise supervision. Is it a perf/RAM limitation, or something deeper?
The last idea gets you constant predictions across all pixels, I expect. Training D to produce meaningful per-pixel outputs seems like the obvious hard part of the problem, requiring cleverness and probably not working.
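For context, patch-level discriminators (in the spirit of pix2pix's PatchGAN) do emit a grid of local real/fake scores rather than one scalar. A minimal dependency-free sketch of the idea below; the fixed random linear scorer is a stand-in for a learned conv net, and all names are illustrative, not from any particular implementation:

```python
# Hedged sketch: a "patch discriminator" scores local regions instead of
# emitting one global real/fake scalar. The random linear projection stands
# in for learned weights so the example needs only numpy.
import numpy as np

rng = np.random.default_rng(0)

def patch_scores(image, patch=8):
    """Return one realness score per non-overlapping patch.

    A real PatchGAN uses a small conv net; here a random linear
    projection plays that role purely for illustration.
    """
    h, w = image.shape
    ph, pw = h // patch, w // patch
    w_vec = rng.standard_normal(patch * patch)  # stand-in for learned weights
    scores = np.empty((ph, pw))
    for i in range(ph):
        for j in range(pw):
            block = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            scores[i, j] = block.ravel() @ w_vec
    return scores

img = rng.standard_normal((32, 32))
per_patch = patch_scores(img)      # shape (4, 4): localized supervision
global_loss = per_patch.mean()     # collapsing to a scalar discards locality
print(per_patch.shape, float(global_loss))
```

The last two lines are the crux of the question: averaging the patch map back to one number is exactly the step that throws the spatial information away.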
-
-
Possibly, yeah. On the other hand, REINFORCE and GANs don't seem like they should work either, given their crude averaged global feedback (and often they don't). But getting richer supervision out of Ds seems like an obvious thing someone should've tried, and yet no one has...
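The "crude global feedback" point can be made concrete with REINFORCE on a two-armed bandit: a single scalar reward scales the log-probability gradient, with no per-dimension credit assignment, yet the policy still drifts toward the better arm. A toy sketch (all numbers and names are made up for illustration, not GAN code):

```python
# Hedged sketch: REINFORCE with only a global scalar reward still learns.
# Two-armed bandit; theta is the logit for picking arm 1 (the better arm).
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0                      # policy parameter: logit for arm 1
true_rewards = [0.2, 0.8]        # arm 1 pays more on average

for step in range(2000):
    p1 = 1 / (1 + np.exp(-theta))        # P(pick arm 1)
    arm = int(rng.random() < p1)
    r = true_rewards[arm]
    # REINFORCE update: grad log pi(arm) * reward (one global scalar).
    # For a Bernoulli policy, d/dtheta log pi(arm) = arm - p1.
    grad_logp = arm - p1
    theta += 0.1 * grad_logp * r

print(1 / (1 + np.exp(-theta)))  # policy has shifted toward the better arm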
-
I've seen "train D to output probabilities, but train G to match activations in the pre-final layer(s) of D". I myself played with training D to estimate the distance from an image to its reconstruction in a VAE-GAN; it didn't help (when I clumsily tried it), even though it's a more pixelwise idiom.
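The "match activations in a pre-final layer of D" idea is usually called feature matching; the generator minimizes the distance between D's intermediate features on real versus generated images, rather than D's final probability. A minimal numpy sketch, where a fixed random ReLU layer stands in for D's learned feature extractor (everything here is a toy assumption, not anyone's actual model):

```python
# Hedged sketch of feature matching: compare real and fake images in the
# feature space of D's penultimate layer instead of via D's final output.
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((16, 64))     # stand-in for D's penultimate layer

def features(x):
    # one ReLU layer as a toy stand-in for D's feature extractor
    return np.maximum(W @ x, 0.0)

real = rng.standard_normal(64)
fake = rng.standard_normal(64)

# feature-matching loss: squared distance between D-features of real and fake;
# G would be trained to drive this toward zero
fm_loss = np.mean((features(real) - features(fake)) ** 2)
print(fm_loss)
```

In practice the features come from a trained D and the loss is averaged over minibatches, but the structure is the same: supervision is richer than a single scalar probability, though still not per-pixel.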