is anything more than just a collection of textures?
-
Thank you, galaxy brain.
-
Slightly more seriously, a texture can be chopped up and mixed around and still be the same texture, even if the spatial relations between all the parts are completely scrambled.
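That invariance is easy to see with a toy sketch (the patch labels below are made up, purely to illustrate the point):

```python
from collections import Counter

# A bag-of-features representation keeps only the multiset of local
# patches; shuffling their positions leaves the bag unchanged.
patches  = ["stripe", "dot", "stripe", "fur"]
shuffled = ["fur", "stripe", "dot", "stripe"]

same_bag = Counter(patches) == Counter(shuffled)
print(same_bag)  # True: the spatial arrangement is invisible to the bag
```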
End of conversation
New conversation -
That being said, this figure showing pixels with high class-evidence doesn't look like texture. pic.twitter.com/D417WMeHND
-
Looks more like edge detection?
End of conversation
New conversation -
They might, if they are allowed to. The fact that you can do this, for me, reflects more poorly on ImageNet than on ConvNets.
-
I wonder how well humans do? We're pretty good at recognizing closeups where most of an object is obscured.
-
Put another way, I wonder to what extent it's true that, to paraphrase
@pfau: "humans are basically just good at learning to classify local textures." -
To me that would be at odds with how effortless it is to e.g. watch cartoons, interpret art, etc. That said, I've seen a few examples where ConvNets also possess this ability, though to a lesser extent; I suspect because they are allowed to be lazy by their training data.
-
Ever thought of creating something similar to reCAPTCHA for Tesla to increase training data? Perhaps a game Tesla fans can play to generate more labeled examples?
-
It’s amusingly common to look down on labeling as something anyone could do on the spot, but in fact it requires trained professionals with an extensive understanding of the documentation to do correctly. Otherwise it’s “garbage in, garbage out”.
-
Makes sense. I’m guessing my idea of training data is similar to many people’s: pictures labeled cat, stop sign, human, stop light (green), stop light (yellow), etc. Makes sense that it is far more complicated than that. Thanks for the reply!
End of conversation
New conversation -
"there exists an X such that Y" != "for all X, Y." Basic logic, people! Note also (as
@dribnet shows), "there exists an X such that not Y" is sufficient to disprove the second. The prominence of sensationalism over sound reasoning on machine learning/AI topics is exhausting. -
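The quantifier point can be made concrete in a couple of lines (the network names and labels below are hypothetical, chosen only to illustrate the logic):

```python
# One confirming instance supports "there exists an X such that Y",
# while a single counterexample refutes "for all X, Y".
nets = {"net_a": "texture", "net_b": "texture", "net_c": "shape"}

exists_texture = any(bias == "texture" for bias in nets.values())
forall_texture = all(bias == "texture" for bias in nets.values())

print(exists_texture, forall_texture)  # True False
```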
My qualifier of "basically" means my statement is better understood in a modal logic, tho.
-
No, your qualifier is in the wrong spot for that to lessen the claim vs. my criticism: "Neural networks [basically]" vs. "[Basically all] neural networks"
-
And an N of one or small N does not justify any similar variants of that claim.
-
And perhaps "textures" is also poorly defined here. For example, I think the patches in the paper are not rotationally invariant, though I informally consider "textures" to not have a particular orientation.
-
research idea: hook this bag-of-features architecture up to a VQA task and evaluate how well the system does at answering questions that seemingly require global information (e.g. "is the cat under the couch?"). I'd bet it does better than expected.
End of conversation
New conversation -
@dribnet will not agree -
indeed - it is provably false. https://twitter.com/dribnet/status/987926823366418432 …
-
and here's that same example one year later, now generalizing to 13 different architectures. is there any other reasonable hypothesis for how these results could continue to work on new neural architectures? pic.twitter.com/wI59L8hd3X
End of conversation