At the risk of getting into a Twitter battle, I feel the need to point this out: one example of a model-generated image going awry doesn't justify concluding it's a case of dataset or model bias. That's, ironically, quite a biased thing to do.
Such accusations carry a pretty heavy weight to them for the people involved. Be responsible about making them.
Replying to @citnaj
The other way around also works: cherry-picked results that show your algorithm works. N=1 is always unscientific and often dumb.
2 replies 0 retweets 1 like
Replying to @mSchmitz_
Yeah, that's very true. It actually seems pretty normal to see basically that in papers, and it gets a pass. Really weird.
12:55 PM - 3 Jul 2020