Excited to share our work on self-supervised learning in videos. Our method, temporal cycle-consistency (TCC) learning, looks for similarities across videos to learn useful representations. #CVPR2019 #computervision
Video: https://www.youtube.com/watch?v=iWjjeMQmt8E
Webpage: https://sites.google.com/corp/view/temporal-cycle-consistency/
pic.twitter.com/v02Vckd7LY
For a frame in video 1, TCC finds its nearest neighbor (NN) in video 2. To cycle back to video 1, we then find the nearest neighbor of that NN in video 1. If we return to the frame we started from, the two frames are cycle-consistent. TCC minimizes this cycle-consistency error. pic.twitter.com/9dUqwI4Ao0
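The hard nearest-neighbor cycle described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; `emb1` and `emb2` are assumed to be per-frame embedding arrays produced by some trained encoder:

```python
import numpy as np

def cycle_consistent(i, emb1, emb2):
    """Check cycle-consistency for frame i of video 1.

    emb1, emb2: (num_frames, dim) arrays of per-frame embeddings
    (assumed given; in the paper these come from a learned encoder).
    """
    # Nearest neighbor of frame i of video 1 in video 2.
    d12 = np.linalg.norm(emb2 - emb1[i], axis=1)
    j = int(np.argmin(d12))
    # Nearest neighbor of that frame back in video 1.
    d21 = np.linalg.norm(emb1 - emb2[j], axis=1)
    k = int(np.argmin(d21))
    # Cycle-consistent if we land back where we started.
    return k == i
```

Note this hard argmin version is only a check, not a trainable loss: argmin is not differentiable, which is exactly what the soft nearest neighbors in the next tweet address.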
ML highlights from the paper: 1. A cycle-consistency loss applied directly on low-dimensional embeddings (no GAN or decoder needed). 2. Soft nearest neighbors to find correspondences across videos. Training method: pic.twitter.com/GnD6jw9ZSX
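The soft nearest neighbor in point 2 is what makes the cycle differentiable: instead of an argmin, we take a similarity-weighted average of video 2's frames, then score how close it cycles back to the starting frame. The NumPy sketch below shows a simplified classification-style cycle-back loss; it is illustrative only and not the paper's exact formulation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cycle_back_loss(i, emb1, emb2):
    """Simplified differentiable cycle-consistency loss for frame i of
    video 1 (a sketch; the paper also proposes a regression variant)."""
    u = emb1[i]
    # Soft nearest neighbor of u in video 2: a softmax-weighted
    # average of video 2's frames by (negative squared) distance.
    alpha = softmax(-np.sum((emb2 - u) ** 2, axis=1))
    v_tilde = alpha @ emb2
    # Similarity of the soft NN to every frame of video 1.
    logits = -np.sum((emb1 - v_tilde) ** 2, axis=1)
    # Cross-entropy with the starting frame i as the "correct" class.
    return -np.log(softmax(logits)[i])
```

When the two videos depict the same phases, the soft NN cycles back near frame i and the loss is small; if video 2 has no frame matching frame i, the loss is large. Training minimizes this over many frame pairs.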
TCC discovers the phases of an action without additional labels. In this video, we retrieve nearest neighbors in the embedding space to frames in the reference video. In spite of many variations, TCC maps semantically similar frames to nearby points in the embedding space. pic.twitter.com/k4o4y4o6gE
Self-supervised methods are quite useful in the few-shot setting. Consider the action phase classification task: with only 1 labeled video, TCC achieves performance similar to vanilla supervised models trained with ~50 labeled videos. pic.twitter.com/Xu26Tpr68y
Some applications of the per-frame embeddings learned using TCC: 1. Unsupervised video alignment. pic.twitter.com/bAMpiOIRwd
2. Transfer of annotations/modalities across videos. https://youtu.be/ATDGVqX3INo
3. Fine-grained retrieval using any frame of a video. pic.twitter.com/KX69YtNPMp
This is joint work with @yusufaytar , Jonathan Tompson, @psermanet and Andrew Zisserman.