Q. In GANs, why don't discriminators produce per-pixel losses for the generator's backprop? Surely a bad image is not bad in all pixels equally, yet D produces a single global loss anyway and throws away more precise supervision. Is it a perf/RAM limitation, or something deeper?
Why isn't that already automatically implicit in doing backprop from D's output to D's input pixels / G's output? How would you train D to do that in any other way?
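A minimal sketch of the point in the question (toy D, assumed shapes — not from the thread): backprop from D's single scalar output does reach every input pixel, so a per-pixel gradient signal exists implicitly.

```python
# Toy demo: a vanilla-GAN-style D emits one scalar, yet backprop
# still produces one gradient entry per input pixel.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy discriminator: conv features -> global pool -> scalar score.
D = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

img = torch.randn(1, 1, 16, 16, requires_grad=True)  # stand-in for G's output
score = D(img).sum()  # one global scalar, as in a standard GAN loss
score.backward()

# img.grad has one entry per pixel: the "implicit" per-pixel signal.
print(tuple(img.grad.shape))
```

The follow-up replies argue this implicit signal, while present, can be much noisier than an explicit per-pixel target.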
-
-
It is implicit, but that doesn't mean equivalent — the signal can be a lot noisier. You can train an AlphaGo with MCTS supervision, all of which is 'implicit' in the win/loss feedback, but training on the latter alone is still much slower and less stable.
-
Did you have a particular proposal in mind? What came to my mind after 30 seconds was training G to complete arbitrary fractions of images (including 100%), with both contiguous and discontiguous regions replaced, and then training D to judge per-pixel probabilities.
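A sketch of that proposal (all names and shapes hypothetical): make D fully convolutional so it outputs a real/fake probability per pixel, and train it against a mask marking which regions of a real image were replaced by G's completion.

```python
# Sketch: per-pixel D trained on partially inpainted real images.
import torch
import torch.nn as nn

torch.manual_seed(0)

# No global pooling: one logit per pixel (a PatchGAN-like idea, toy-sized).
pixel_D = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

real = torch.rand(4, 1, 16, 16)
mask = (torch.rand(4, 1, 16, 16) < 0.3).float()  # 1 = region G filled in
fake_fill = torch.rand(4, 1, 16, 16)             # stand-in for G's inpainting
mixed = real * (1 - mask) + fake_fill * mask

logits = pixel_D(mixed)                          # (4, 1, 16, 16): one logit per pixel
# Per-pixel target is the mask itself: fake where G wrote, real elsewhere.
loss = nn.functional.binary_cross_entropy_with_logits(logits, mask)
loss.backward()
```

The per-pixel BCE then hands G a dense map of where its completions were caught, rather than one global number.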
-
I hadn't thought about how exactly you'd train the D itself (rather than using it to train the G). Yes, inpainting regions of a real image would be one way... or noising/shuffling random pixels in real images alone. Or assigning all pixels in fake/real images arbitrary low/high constants.
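The pixel-shuffling idea might look like this (hypothetical `corrupt` helper, toy sizes): swap a random subset of pixels within a real image and use the swap mask as D's per-pixel fake label.

```python
# Sketch: build per-pixel D targets by corrupting pixels of real images.
import torch

torch.manual_seed(0)

def corrupt(img, frac=0.2):
    """Shuffle a random frac of pixels; return (corrupted image, per-pixel label)."""
    flat = img.reshape(-1).clone()
    n = flat.numel()
    idx = torch.randperm(n)[: int(n * frac)]       # pixels to corrupt
    flat[idx] = flat[idx[torch.randperm(idx.numel())]]  # shuffle among themselves
    label = torch.zeros(n)
    label[idx] = 1.0                               # 1 = corrupted ("fake") pixel
    return flat.view_as(img), label.view_as(img)

real = torch.rand(1, 16, 16)
corrupted, target = corrupt(real)
```

D would then be trained to recover `target` from `corrupted`, giving it a dense supervision signal from real images alone.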
-
The last idea just gets you constant predictions for all pixels, I expect. Training D this way seems like the obvious hard part of the problem, requiring cleverness and probably not working.
-
Possibly, yeah. On the other hand, REINFORCE and GANs don't seem like they should work either with crude averaged global feedback (and often don't). But getting richer supervision out of Ds seems like something obvious that someone should have tried, yet no one appears to have...
-
I've seen "train D to output probabilities, but train G to match activations in the pre-final layer(s) of D". I myself played with training D to estimate the distance from an image to its reconstruction in a VAE-GAN; it didn't help (when I clumsily tried), even though it's a more pixelwise idiom.
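The feature-matching idiom mentioned above can be sketched like this (toy nets, assumed shapes): instead of fooling D's final probability, G minimizes the distance between D's pre-final-layer activation statistics on real vs. generated batches.

```python
# Sketch: feature-matching loss for G against D's pre-final activations.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical split of D into a feature trunk and a final scoring head.
features = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(8, 1)  # D's final layer; unused by this G loss

real = torch.rand(4, 1, 16, 16)
fake = torch.rand(4, 1, 16, 16, requires_grad=True)  # stand-in for G's output

# Match batch-mean features of fake to those of real.
fm_loss = (features(real).mean(0) - features(fake).mean(0)).pow(2).mean()
fm_loss.backward()  # gradient flows back to every generated pixel
```

This gives G a richer target than a single probability, though the feedback is still pooled over the batch rather than resolved per pixel.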
End of conversation