Excited to share our work on self-supervised learning in videos. Our method, temporal cycle-consistency (TCC) learning, looks for similarities across videos to learn useful representations. #CVPR2019 #computervision
Video: https://www.youtube.com/watch?v=iWjjeMQmt8E
Webpage: https://sites.google.com/corp/view/temporal-cycle-consistency/
pic.twitter.com/v02Vckd7LY
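For readers wondering what "cycle consistency" looks like in practice, here is a minimal sketch of the cycle-back idea: embed the frames of two videos, take the soft nearest neighbour of a frame in the other video, and penalize the model if cycling back does not land on the frame you started from. This is an illustrative PyTorch sketch with placeholder tensors, not the authors' implementation (the released code is TensorFlow).

```python
import torch
import torch.nn.functional as F

def cycle_back_classification_loss(u, v):
    """u: (N, D) frame embeddings of video 1, v: (M, D) frame embeddings of video 2."""
    # Soft nearest neighbour of every frame of u among the frames of v.
    dists_uv = torch.cdist(u, v) ** 2          # (N, M) squared distances
    alpha = F.softmax(-dists_uv, dim=1)        # soft assignment over frames of v
    v_tilde = alpha @ v                        # (N, D) soft neighbours

    # Cycle back: each soft neighbour should land on the frame it started from.
    dists_back = torch.cdist(v_tilde, u) ** 2  # (N, N)
    logits = -dists_back                       # closer frame -> higher logit
    labels = torch.arange(u.shape[0])          # frame i should cycle back to i
    return F.cross_entropy(logits, labels)

# Placeholder embeddings standing in for the outputs of a per-frame encoder.
u = torch.randn(40, 128, requires_grad=True)
v = torch.randn(50, 128, requires_grad=True)
loss = cycle_back_classification_loss(u, v)
loss.backward()                                # gradients would flow into the encoder
```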
TCC discovers the phases of an action without additional labels. In this video, we retrieve nearest neighbors in the embedding space to frames in the reference video. In spite of many variations, TCC maps semantically similar frames to nearby points in the embedding space. pic.twitter.com/k4o4y4o6gE
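A rough sketch of how such nearest-neighbour retrieval can be done once per-frame embeddings are available; the arrays below are placeholders standing in for the outputs of a trained encoder.

```python
import numpy as np

def nearest_frames(ref_emb, other_emb):
    """For each frame of the reference video, return the index of the closest
    frame of the other video in embedding space (Euclidean distance)."""
    dists = ((ref_emb[:, None, :] - other_emb[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)                  # (num_ref_frames,) matched indices

# Placeholder embeddings standing in for a trained per-frame encoder's outputs.
ref_emb = np.random.randn(40, 128)
other_emb = np.random.randn(55, 128)
matches = nearest_frames(ref_emb, other_emb)     # matched frame per reference frame
```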
Self-supervised methods are quite useful in the few-shot setting. Consider the action phase classification task: with only 1 labeled video, TCC achieves performance similar to vanilla supervised models trained with ~50 labeled videos. pic.twitter.com/Xu26Tpr68y
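A hedged sketch of this few-shot setup: freeze the learned encoder, embed the single labeled video, and fit a simple per-frame classifier on its embeddings. The random arrays below stand in for real TCC features, and the SVM is just one reasonable classifier choice, not necessarily the one used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder embeddings and labels standing in for real TCC features (assumption).
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(40, 128))      # frames of the one labeled video
train_phase = rng.integers(0, 4, size=40)   # per-frame action-phase labels
test_emb = rng.normal(size=(60, 128))       # frames of an unlabeled video

clf = SVC().fit(train_emb, train_phase)     # any lightweight classifier works here
pred = clf.predict(test_emb)                # predicted phase for every test frame
```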
Some applications of the per-frame embeddings learned using TCC: 1. Unsupervised video alignment pic.twitter.com/bAMpiOIRwd
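One common way to turn per-frame embeddings into an alignment is dynamic time warping over pairwise embedding distances. Below is a minimal NumPy sketch under that assumption; the paper's exact alignment procedure may differ.

```python
import numpy as np

def align_videos(emb_a, emb_b):
    """Dynamic time warping over per-frame embedding distances; returns a list
    of (frame_a, frame_b) index pairs forming the alignment path."""
    n, m = len(emb_a), len(emb_b)
    cost = ((emb_a[:, None, :] - emb_b[None, :, :]) ** 2).sum(-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Placeholder embeddings standing in for two videos' per-frame TCC features.
path = align_videos(np.random.randn(40, 128), np.random.randn(55, 128))
```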
This is joint work with @yusufaytar, Jonathan Tompson, @psermanet, and Andrew Zisserman.