One of the most interesting aspects of the solution was using a high lr as regularization. I fine-tuned large architectures on ~7k images for as many as 120 epochs (multiple 30-epoch cycles) with a batch size of 42 and an lr of 0.1 using the 1cycle policy, without overfitting.
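The shape of the 1cycle schedule mentioned above can be sketched in a few lines. This is only an illustrative reimplementation (warm up to `max_lr`, then anneal far below the starting lr) — the actual runs used fastai's implementation, and the `pct_start` / div-factor defaults here mirror common fastai/PyTorch values, not the competition settings.

```python
import math

def one_cycle_lr(step, total_steps, max_lr=0.1, pct_start=0.3,
                 div_factor=25.0, final_div_factor=1e4):
    """Learning rate at `step` of a 1cycle schedule with cosine annealing.

    Shape only: warm up from max_lr/div_factor to max_lr over the first
    pct_start of training, then anneal down to a tiny final lr.
    """
    start_lr = max_lr / div_factor
    final_lr = start_lr / final_div_factor
    warm_steps = int(total_steps * pct_start)
    if step < warm_steps:
        t = step / max(1, warm_steps)          # 0 -> 1 over the warmup
        lo, hi = start_lr, max_lr
    else:
        t = (step - warm_steps) / max(1, total_steps - warm_steps)
        lo, hi = max_lr, final_lr
    # cosine interpolation from lo to hi as t goes 0 -> 1
    return lo + (hi - lo) * (1 - math.cos(math.pi * t)) / 2

# e.g. ~7k images at batch size 42 for one 30-epoch cycle, as in the thread
total = (7000 // 42) * 30
```

The regularizing effect comes from the long stretch spent near the (large) peak lr before the final anneal.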
The final solution consists of a couple of architectures fine-tuned in various ways (different things worked for different archs). I ran xgboost on the predictions and combined all the outputs on the GPU, using heuristics based on a couple of papers to optimize the micro-averaged F1.
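One common heuristic for this kind of micro-F1 optimization is to sweep a decision threshold over the ensembled per-label probabilities on a validation set and keep the one that maximizes micro-averaged F1. The thread doesn't spell out its exact method (and the xgboost stacking step is omitted here), so this is just a minimal sketch of the thresholding idea, with made-up toy data:

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool TP/FP/FN across all samples and labels."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def best_threshold(probs, y_true, candidates=np.linspace(0.05, 0.95, 19)):
    """Pick the global threshold that maximizes micro-F1 on validation data."""
    scores = [micro_f1(y_true, (probs >= t).astype(int)) for t in candidates]
    return candidates[int(np.argmax(scores))]

# toy validation data: 4 samples x 3 labels (illustrative only)
probs = np.array([[0.9, 0.2, 0.6], [0.1, 0.8, 0.4],
                  [0.7, 0.3, 0.9], [0.2, 0.6, 0.1]])
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1], [0, 1, 0]])
t = best_threshold(probs, y_true)
```

A refinement used in some of the papers on this problem is searching a separate threshold per label rather than one global cutoff.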
The extent to which the @fastdotai lib and @PyTorch were instrumental to this is beyond what can be expressed in a tweet. I usually was minutes away from trying something new, and both gave me amazingly engineered building blocks to combine / extend.
But enough of making statements based on perceptions! Time to look at some data
Here is model performance on the val set. Not indicative of arch performance in general, but it carries some info on the ability to train said models using the tools I used on a single 1080ti. pic.twitter.com/VKWVxbmKih
I wrote a few more words that you can read on the Kaggle forums here: https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/discussion/57944. Happy to answer any questions that you might have!
End of conversation

New conversation
Thanks for sharing your experience, Radek, and congrats on the success! Can you share a bit more as to how you used the fastai course? I am aware that it's recommended that you try to reproduce the notebooks. But, what was _your_ game plan, pace, and method for learning? Thanks!
I tried following @jeremyphoward's advice. As nothing else seemed to work, attempting something completely different seemed like a reasonable thing to do. I started to tweet. I started to blog at https://medium.com/@radekosmulski. I participated in discussions on the forums. I also worked on the notebooks until they felt 'mine'. This ofc is not a statement about the ownership of anything, but a notebook I spent enough time working on has a completely different feel to it that I don't know how to describe. You can find some of them here: https://github.com/radekosmulski/machine_learning_notebooks.
I came up with mini projects to work on, like this one: https://github.com/radekosmulski/presidential. But to my mind, steeped in 12+ years of schooling, this does not seem like learning. Maybe more like playing. But what happened to the glory days when I could write out the derivatives for everything that happens in a CNN or an RNN, yet would draw a blank if confronted with a csv of data? I still don't see what I did as learning, but I suspect it works and I seem to know what to do now, so I guess I'll continue doing it for a while longer and see what happens.

BTW I also read this book by @barbaraoakley: https://www.amazon.com/Mind-Numbers-Science-Flunked-Algebra-ebook/dp/B00G3L19ZU/ref=sr_1_1?ie=UTF8&qid=1527840195&sr=8-1&keywords=a+mind+for+numbers. The title is not indicative of the content. It provides a rundown of how learning works, and it's nothing like most of us would expect. There is a MOOC associated with it, https://www.coursera.org/learn/learning-how-to-learn, but I preferred the book.
Thanks so much for sharing. You're an inspiration!
End of conversation

New conversation
How much of a beginner were you when you gave up, sir?
I am not sure there is a good answer to this question, as what people generally have in mind when they say 'background' can be detrimental to picking up new skills. I am not sure you can have too little math or CS background to teach your computer how to be very good at telling cats from dogs, but I am sure you can have too much! This is a real, honest answer. You can try to compensate for having a background with humility and an open mind, but these tend to be not as reliable, it seems.
Why would knowing math be bad? Maybe if combined with an inflexible mind
It isn't. As long as you don't think that knowing more math is the best / only way to become better at the trillion fields that people seem to hold such a belief about. I know I did, and this nearly led to me becoming absolutely useless at a field I seem to care deeply about.
As a side note, I enjoy math nearly as much as I do ML.
I think I probably misunderstood you then, sorry about that and my slightly snarky tone
no worries at all
End of conversation