And then I have both the Jupyter notebook “test” image suites and a batch of hundreds more public domain images. It’s painful and tedious, but looking at these directly with my own eyes is, I’ve found, the only reliable way I have to measure progress. And even then you have to be extremely careful about perception issues: you misremember stuff easily, and you literally see things differently depending on context. So I don’t necessarily take a first pass as the final say on whether or not the model is great.
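One common way to blunt those context and memory effects is a blinded, randomized presentation: hide which variant is which and shuffle the order so you can’t anchor on “this one is the new model.” A minimal sketch (the function name, the A/B labeling, and the seed parameter are my own illustration, not something from this thread):

```python
import random

def make_blind_pairs(image_ids, seed=None):
    """Assign hidden A/B labels and a random left/right order for each
    image pair (baseline output vs. new-model output), so the viewer
    doesn't know which side is which while judging.

    Returns (presentations, answer_key):
      presentations -- list of (image_id, ("A", "B") or ("B", "A"))
      answer_key    -- image_id -> index (0 or 1) of the new model's
                       output within that presentation order
    """
    rng = random.Random(seed)  # fixed seed makes a session reproducible
    presentations, answer_key = [], {}
    for img in image_ids:
        order = ("A", "B") if rng.random() < 0.5 else ("B", "A")
        presentations.append((img, order))
        answer_key[img] = order.index("A")  # convention: "A" = new model
    return presentations, answer_key
```

You score your judgments first and only consult `answer_key` afterwards; that way a second look at the same images on a different day starts from the same blind footing.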
And yes, this process drives me nuts, so please tell me if there’s a better way lol
There isn’t a better way. Or if there is, I haven’t found it in like 8 years of graphics research. Even in the most rigorous academic circles, the test used in most cases is “show these images to a bunch of humans and see how many can spot differences.”
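That “show these images to a bunch of humans” test has a simple statistical backbone: in a forced-choice setup, guessing observers are right about half the time, so you check whether the observed spot rate is significantly above chance. A minimal sketch using an exact one-sided binomial test (the function name and the 50% chance level for a two-alternative setup are my assumptions, not a procedure described in the thread):

```python
from math import comb

def spot_rate_p_value(correct, trials, chance=0.5):
    """One-sided exact binomial test.

    Probability of seeing at least `correct` right answers out of
    `trials` if every observer were merely guessing at rate `chance`.
    A small p-value means viewers genuinely can tell the images apart.
    """
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )
```

For example, 18 correct calls out of 20 yields a p-value well under 0.001, so that difference is almost certainly visible; 12 out of 20 is entirely consistent with guessing.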
And as you say, the viewing conditions matter a lot. *Especially* for color. FWIW, I used an i1DisplayPro for calibrating my U2711 monitor. Back in 2012 that was pretty much top of the line calibration+monitor combo for digital work.