Is there an explanation somewhere about the decision to use RED (R,0,0,1) instead of INTENSITY (I,I,I,I) behavior for single-channel textures in GPU programming? It seems like the universally wrong method to standardize, but maybe I'm missing something?
It seems like a case where the default was chosen poorly, and I wanted to know whether I'm wrong about that.
It is at least true in OpenGL - textures by default do R,0,0,1 for single-channel formats, and there is no way to have them default to R,R,R,R. You _must_ set the texture swizzle state (GL_TEXTURE_SWIZZLE_RGBA) explicitly if you want that to happen. And I doubt that hardware generally prefers R,0,0,1 to R,R,R,R...
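A minimal sketch of what the two defaults being debated mean at sample time. This is a pure-Python simulation, not an actual GL call; the component names are made up to mirror OpenGL's swizzle tokens (GL_RED, GL_ZERO, GL_ONE):

```python
# Simulate how a swizzle expands one stored channel into the RGBA
# vector the shader sees when sampling a single-channel texture.

def apply_swizzle(stored_red, swizzle):
    """Return the RGBA tuple produced by sampling a one-channel texel."""
    sources = {"R": stored_red, "ZERO": 0.0, "ONE": 1.0}
    return tuple(sources[c] for c in swizzle)

# OpenGL's default for a GL_RED texture: (R, 0, 0, 1)
print(apply_swizzle(0.5, ("R", "ZERO", "ZERO", "ONE")))  # (0.5, 0.0, 0.0, 1.0)

# The legacy GL_INTENSITY behavior the thread asks about: (I, I, I, I)
print(apply_swizzle(0.5, ("R", "R", "R", "R")))          # (0.5, 0.5, 0.5, 0.5)
```

In desktop GL 3.3+ (ARB_texture_swizzle) the second behavior can be requested per texture object with glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, ...), but the out-of-the-box default remains R,0,0,1.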
End of conversation
New conversation
One practical reason would be that at least one texture compression format (cough PVRTC) skimps on the precision for blue, so tying it to 0 (or 255) is better for 'monochrome'. Conversely, IIRC, ETC is better off setting R==G==B. <shrug>