idle: does anyone know a good way to generate a decent depth map from a single (mono) camera "in the general case", and not eat CPU?...
-
-
If you just care about the Quake case, though, you can cheat by knowing the set of textures and their scale.
-
was more hoping for a "general case"; I had found that a useful feature of the webcam images did not apply to Quake, and the NNs did little.
-
namely, the webcam images had a non-uniform entropy distribution that loosely tracked distance (closer = softer/fuzzier).
-
in contrast, the Quake images had fairly uniform entropy regardless of distance, and seemingly no other distance-correlated properties...
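(a minimal sketch of that entropy cue, assuming the closer = fuzzier mapping above; block size and the grayscale input are arbitrary choices, not from the thread:)

    import numpy as np

    def block_entropy_map(gray, block=16):
        # gray: 2D uint8 image; returns per-block Shannon entropy.
        h, w = gray.shape
        out = np.zeros((h // block, w // block), dtype=np.float32)
        for by in range(out.shape[0]):
            for bx in range(out.shape[1]):
                patch = gray[by*block:(by+1)*block, bx*block:(bx+1)*block]
                hist = np.bincount(patch.ravel(), minlength=256).astype(np.float64)
                p = hist / hist.sum()
                p = p[p > 0]
                out[by, bx] = -(p * np.log2(p)).sum()
        # per the observation above: lower entropy ~ softer/fuzzier ~ closer
        return out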
-
Measure instead the size of the regions corresponding to a single texture pixel; that size goes as ~1/z.
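(a minimal sketch of that measurement, assuming point-sampled textures as in Quake's software renderer, where one texel covers a run of identical palette values on screen; the run-length heuristic is just one way to estimate the footprint:)

    import numpy as np

    def mean_run_length(scanline):
        # scanline: 1D array of pixel values (palette indices work well).
        # Average length of runs of identical values; with point-sampled
        # textures this tracks the on-screen size of one texel.
        changes = np.flatnonzero(np.diff(scanline)) + 1
        runs = np.diff(np.concatenate(([0], changes, [len(scanline)])))
        return runs.mean()

    def relative_depth(scanline):
        # texel footprint ~ 1/z, so relative depth ~ 1/footprint
        # (absolute scale would need the texture's world size and the FOV).
        return 1.0 / mean_run_length(scanline)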
-
possible. I was mostly using Quake as a test, as I could easily hack it to record both the frame-buffer and z-buffer, & use the pairs to train some neural nets.
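(for illustration only, a tiny PyTorch sketch of the task shape; the thread doesn't say what nets were actually tried, and TinyDepthNet / train_step are made-up names:)

    import torch
    import torch.nn as nn

    class TinyDepthNet(nn.Module):
        # maps a captured frame-buffer to a per-pixel depth estimate
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),  # per-pixel depth out
            )
        def forward(self, x):
            return self.net(x)

    def train_step(model, opt, frames, zbuf):
        # frames: (N,3,H,W) frame-buffer captures; zbuf: (N,1,H,W) z-buffer captures
        # opt is e.g. torch.optim.Adam(model.parameters(), lr=1e-3)
        opt.zero_grad()
        loss = nn.functional.l1_loss(model(frames), zbuf)
        loss.backward()
        opt.step()
        return loss.item()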
-
my prior webcam thing didn't use NNs though; I was hoping NNs could find a more general solution. NN fail video: https://www.youtube.com/watch?v=-3qcKgv7BeU
-
but I have noted that most of my attempts at using NNs for non-trivial tasks have tended to fail (though simpler tasks work better...).