These are some really interesting test cases. Thanks for posting! Can you expand on the water/road confusion? Is there a particular picture you have in mind?
Replying to @citnaj
Roads and water look quite similar in the aerial with Westminster Palace (and in some places in the Crystal Palace shot, where the large green area of grass surrounding it is colourized light blue). I guess that's because they have similar shades of grey and similar texture details.
Replying to @SilviuVladPirvu @UrbanPlanningAI
Ahh got it! So I'll just throw out some speculation here: as far as I can tell, the ImageNet dataset doesn't really have images shot at this perspective. I'm pretty sure part of the contextual information that goes into how the model interprets the scenes is relative (cont)
Replying to @citnaj @UrbanPlanningAI
position and the general expectation that skies are up and water is down. I really think it boils down to that. So if you wanted it to perform well on images such as these, you'd want to supplement the dataset with aerial shots like this. I think that'd make a big difference.
Replying to @citnaj @UrbanPlanningAI
Honestly it'd probably greatly improve the look of the buildings and other things as well.
Replying to @citnaj
That would be good. I'll have a closer look at how to enrich the dataset. I'm also thinking that some data augmentation can be done by using different pairs of b&w and colour images... (cont)
Replying to @SilviuVladPirvu @citnaj
(cont) ...e.g. green/red/yellow-filter B&W conversions, as well as different vertical and horizontal distortions, flipping transformations, and multi-cropping, all batch-exported from Adobe Lightroom.
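(For illustration: a colour-filter B&W conversion like the ones mentioned above is roughly a weighted sum of the RGB channels before going to greyscale. A minimal sketch; the function name, dictionary, and weights below are illustrative guesses, not calibrated filter responses.)

```python
import numpy as np

# Hypothetical weights simulating colour-filter B&W conversions.
# A red filter lightens reds and darkens greens/blues, and so on.
FILTER_WEIGHTS = {
    'neutral': (0.299, 0.587, 0.114),  # standard luma weights
    'red':     (0.80, 0.15, 0.05),
    'green':   (0.10, 0.80, 0.10),
    'yellow':  (0.45, 0.45, 0.10),
}

def filtered_bw(img_rgb, filter_name):
    """img_rgb: HxWx3 float array in [0, 1]; returns an HxWx3 greyscale image."""
    r, g, b = FILTER_WEIGHTS[filter_name]
    grey = r * img_rgb[..., 0] + g * img_rgb[..., 1] + b * img_rgb[..., 2]
    return np.repeat(grey[..., None], 3, axis=-1)  # keep 3 channels
```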
Replying to @SilviuVladPirvu @UrbanPlanningAI
Awesome. What I'd actually suggest is that any data augmentation where you're altering images be done on the fly, as an additional transform in this list found in ColorizeTraining.ipynb: extra_aug_tfms = [RandomLighting(0.1, 0.1, tfm_y=TfmType.PIXEL)]
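(For context, on-the-fly just means appending more transforms to that list, so each batch gets a freshly perturbed version of the data. A minimal sketch, assuming the fastai 0.7 API used in the notebook; RandomDihedral is one of its stock geometric transforms:)

```python
from fastai.transforms import RandomLighting, RandomDihedral, TfmType

# On-the-fly augmentation: each transform runs per batch, so every epoch
# sees a differently perturbed version of the training set.
extra_aug_tfms = [
    RandomLighting(0.1, 0.1, tfm_y=TfmType.PIXEL),  # brightness/contrast jitter on input and target
    RandomDihedral(tfm_y=TfmType.PIXEL),            # random flips/90-degree rotations, applied to both
]
```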
Replying to @citnaj @UrbanPlanningAI
You can see my BlackAndWhiteTransform as a simple example in transforms.py. It uses OpenCV. Admittedly OpenCV is a bit weirdly user-unfriendly IMHO, but it is faster. I like doing augmentations on the fly because then there's a lot more flexibility (cont)
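(The actual BlackAndWhiteTransform lives in the repo's transforms.py; this is a simplified sketch of what such a transform could look like under fastai 0.7's Transform API, assuming float RGB arrays as input:)

```python
import cv2
import numpy as np
from fastai.transforms import Transform, TfmType

class BlackAndWhiteTransform(Transform):
    """Greyscales the input on the fly while leaving the colour target alone."""
    def __init__(self, tfm_y=TfmType.NO):
        super().__init__(tfm_y)

    def do_transform(self, x, is_y):
        if is_y:
            return x  # with tfm_y=TfmType.NO the target is never transformed anyway
        grey = cv2.cvtColor(x, cv2.COLOR_RGB2GRAY)
        return np.stack([grey] * 3, axis=-1)  # keep 3 channels for the model
```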
Replying to @citnaj @UrbanPlanningAI
(cont) in terms of being able to experiment and tweak. For example, I can turn the model into a DeFade model simply by replacing BlackAndWhiteTransform with RandomLighting, where I modify the input (but not the target) with ridiculous contrast and brightness alterations.
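(A sketch of that DeFade setup; the 0.5 values are illustrative stand-ins for "ridiculous" settings, and tfm_y=TfmType.NO is what keeps the target untouched:)

```python
from fastai.transforms import RandomLighting, TfmType

# DeFade idea: aggressive brightness/contrast jitter on the input only, so
# the model learns to restore the original tones. tfm_y=TfmType.NO leaves
# the colour target unmodified.
extra_aug_tfms = [RandomLighting(0.5, 0.5, tfm_y=TfmType.NO)]
```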
Oh, keep in mind too: the Fast.AI library has a lot of transforms (including the cropping and flipping you mention) in its own transforms file (fastai/transforms.py).
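(For example, a few of the stock fastai 0.7 transforms covering that cropping and flipping; a sketch, where tfm_y=TfmType.PIXEL keeps the input and colour target geometrically aligned, and the sizes/angles are illustrative:)

```python
from fastai.transforms import RandomCrop, RandomFlip, RandomRotate, TfmType

extra_aug_tfms = [
    RandomFlip(tfm_y=TfmType.PIXEL),        # random horizontal flips
    RandomRotate(10, tfm_y=TfmType.PIXEL),  # rotations up to +/- 10 degrees
    RandomCrop(224, tfm_y=TfmType.PIXEL),   # random 224x224 crops
]
```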