Thanks, I will have a look
End of conversation
New conversation
@fchollet Thank you very much! Very useful as always! Are you planning on releasing a Keras example on how to fine-tune a BERT-based language model? I would be very interested ;)
@benbenfiol You can take a look at this GitHub repo: https://github.com/CyberZHG/keras-bert
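For reference, a minimal fine-tuning sketch using the keras-bert package linked above: load a pre-trained checkpoint with trainable weights, take the hidden state at the [CLS] position, and add a small classification head. The checkpoint paths, sequence length, and hyperparameters below are placeholders (not from this thread), and the exact call signature should be checked against the repo's README.

```python
# Minimal fine-tuning sketch, assuming the keras-bert API from the repo above.
# keras-bert can target either standalone Keras or tf.keras (set TF_KERAS=1);
# this sketch assumes standalone Keras. Paths/hyperparameters are placeholders.
import keras
from keras_bert import load_trained_model_from_checkpoint

SEQ_LEN = 128
config_path = "uncased_L-12_H-768_A-12/bert_config.json"     # hypothetical path
checkpoint_path = "uncased_L-12_H-768_A-12/bert_model.ckpt"  # hypothetical path

# Load the pre-trained encoder with its weights left trainable for fine-tuning.
bert = load_trained_model_from_checkpoint(
    config_path, checkpoint_path,
    training=False, trainable=True, seq_len=SEQ_LEN,
)

# Use the hidden state at the [CLS] position as the sentence representation
# and add a small classification head (binary task assumed here).
cls_vector = keras.layers.Lambda(lambda seq: seq[:, 0, :])(bert.output)
probs = keras.layers.Dense(2, activation="softmax")(cls_vector)

model = keras.Model(bert.inputs, probs)
model.compile(optimizer=keras.optimizers.Adam(2e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit([token_ids, segment_ids], labels, batch_size=32, epochs=3)
```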
New conversation
Thanks @fchollet and @sidd2006. But it seems, based on the validation loss, that not much is being learnt past the first epoch. Any ideas why? pic.twitter.com/476xSilFEr
Note that I am using cross-entropy (CE) as the loss function (suggested by @fchollet) on normalized ratings. CE loss is generally more stable (log scale). The MSE model is trained with actual ratings as targets, interpolating the sigmoid output to the rating scale: min + x * (max - min). pic.twitter.com/5SDYhGzFs8
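To make the two setups concrete, here is a rough Keras sketch of both heads; the rating range, input shape, and layer sizes are assumptions for illustration, not details from the thread. Option 1 rescales targets to [0, 1] and uses binary cross-entropy against a sigmoid output; option 2 rescales the sigmoid output back to the rating range with min + x * (max - min) and trains with MSE against the raw ratings.

```python
# Rough sketch of the two training setups described above (CE on normalized
# ratings vs. MSE on raw ratings). The rating range, input shape, and layer
# sizes are assumed for illustration and are not taken from the thread.
import keras
from keras import layers

R_MIN, R_MAX = 1.0, 5.0                      # assumed rating scale

inputs = keras.Input(shape=(64,))            # placeholder feature vector
hidden = layers.Dense(128, activation="relu")(inputs)
sigmoid_out = layers.Dense(1, activation="sigmoid")(hidden)   # x in [0, 1]

# Option 1: cross-entropy on normalized ratings.
# Targets are rescaled to [0, 1] as (y - R_MIN) / (R_MAX - R_MIN) and
# binary cross-entropy is applied directly against the sigmoid output.
ce_model = keras.Model(inputs, sigmoid_out)
ce_model.compile(optimizer="adam", loss="binary_crossentropy")

# Option 2: MSE on actual ratings.
# The sigmoid output is interpolated back to the rating scale,
# min + x * (max - min), and MSE is computed against the raw ratings.
rescaled = layers.Lambda(lambda x: R_MIN + x * (R_MAX - R_MIN))(sigmoid_out)
mse_model = keras.Model(inputs, rescaled)
mse_model.compile(optimizer="adam", loss="mse")

# Usage (features X, raw ratings y in [R_MIN, R_MAX]):
# ce_model.fit(X, (y - R_MIN) / (R_MAX - R_MIN), epochs=5)
# mse_model.fit(X, y, epochs=5)
```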