Is there an explanation somewhere about the decision to use RED (R,0,0,1) instead of INTENSITY (I,I,I,I) behavior for single-channel textures in GPU programming? It seems like the universally wrong method to standardize, but maybe I'm missing something?
That is not what we are talking about. We are talking about the case where, in the shader code, _all four_ channels are used. Then somebody binds a monochrome texture. What happens? The shader is (typically) not recompiled at all, and the texture unit must supply those values.
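For concreteness, a minimal sketch of the situation being described, assuming a GLSL 3.30 fragment shader that reads all four channels; the shader text and the names in it are illustrative, not taken from the conversation:

```c
/* Minimal sketch: a fragment shader that uses all four channels of a sample.
 * The shader source, uniform name, and varying names are assumptions. */
static const char *frag_src =
    "#version 330 core\n"
    "uniform sampler2D u_tex;\n"
    "in vec2 v_uv;\n"
    "out vec4 o_color;\n"
    "void main() {\n"
    /* If u_tex is bound to a single-channel (e.g. GL_R8) texture, the
     * shader is not recompiled; the texture unit fills in the missing
     * channels, so the sample comes back as (r, 0, 0, 1) rather than
     * the old GL_INTENSITY-style (r, r, r, r). */
    "    o_color = texture(u_tex, v_uv);\n"
    "}\n";
```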
-
-
It actually seems like maybe everyone but Intel (as Tom suggests?) handles this properly anyway, which makes my question even more relevant: why was the default chosen to be one that (IMO) is never wanted?
-
As far as I know, everyone has the various swizzles needed to meet the API specs, which now include arbitrary swizzles. See my other tweet for a suggestion as to why the default is the way it is.
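For reference, a minimal sketch of working around the default with the arbitrary swizzles mentioned above, assuming desktop OpenGL 3.3+ (or ARB_texture_swizzle) with the entry points already loaded via some loader; the function and variable names are illustrative:

```c
#include <glad/glad.h>   /* assumed loader; any GL 3.3+ loader works */

/* Upload a single-channel texture and swizzle it so a shader that reads
 * all four channels sees (I, I, I, I) instead of the default (R, 0, 0, 1). */
void upload_intensity_texture(GLuint tex, int w, int h,
                              const unsigned char *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                 GL_RED, GL_UNSIGNED_BYTE, pixels);

    /* Replicate the red channel into G, B, and A at sampling time;
     * no shader change or recompile is needed. */
    const GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_RED };
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
}
```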
End of conversation