Tweets from @debidatta

  1. Pinned Tweet
    Apr 17, 2019

    Excited to share our work on self-supervised learning in videos. Our method, temporal cycle-consistency (TCC) learning, looks for similarities across videos to learn useful representations. Video: Webpage:

  2. Retweeted
    Jan 28

    Enabling people to converse with chatbots about anything has been a passion of a lifetime for me, and I'm sure of others as well. So I'm very thankful to be able to finally share our results with you all. Hopefully, this will help inform efforts in the area. (1/4)

  3. Retweeted
    Jan 28

    Check out Meena, a new state-of-the-art open-domain conversational agent, released along with a new evaluation metric, the Sensibleness and Specificity Average, which captures basic but important attributes for normal conversation. Learn more below!

  4. Retweeted
    Oct 24, 2019

    Excited to announce our new work! "gradSLAM: Dense SLAM meets automatic differentiation" We leverage the power of autodiff frameworks to make dense SLAM fully differentiable. Paper: Project page: Video:

  5. Retweeted
    Aug 23, 2019

    Our paper "EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition" is now on arXiv. With and and AZ. Video: arXiv: Project:

  6. Retweeted
    Jul 11, 2019

    My new work with on accelerated training of sparse networks from random weights to dense performance levels — no retraining required! Paper: Blog post: Code:

  7. Retweeted
    Jun 21, 2019

    😎 And now, 2.0 is published! 3D markerless pose estimation of user-defined points across any species. Full step-by-step guide, Notebooks, & more! 👇🥳 co-1st: &

  8. Retweeted
    Jun 24, 2019

    Great post on the intuition behind the transformer. I hadn't ever thought about how the CNN could be viewed as a special case of a transformer!

  9. Retweeted
    Apr 29, 2019

    It turns out that YouTube has tons of videos of people pretending to be statues. This is great for learning about the 3D shape of people! Cool new work from at CVPR19 from his Google internship.

  10. Retweeted
    Apr 27, 2019

    My slides from the robotics symposium. The main message: self-supervision on lots of unlabeled play data is an effective recipe for robotics, and we propose multiple methods to implement this recipe for vision and control:

  11. Retweeted
    Apr 27, 2019

    Our new paper on fast and robust animal pose estimation methods—developed in our quest to understand how animals sync and swarm—now available on ! Code will be available soon. Keep an eye out for DeepPoseKit at

  12. Retweeted

    New blog post: "A Recipe for Training Neural Networks", a collection of attempted advice for training neural nets, with a focus on how to structure that process over time.

  13. Apr 17, 2019

    This is joint work with , Jonathan Tompson, and Andrew Zisserman.

  14. Apr 17, 2019

    3. Fine-grained retrieval using any frame of a video.

  15. Apr 17, 2019

    2. Transfer of annotations/modalities across videos.

  16. Apr 17, 2019

    Some applications of the per-frame embeddings learned using TCC: 1. Unsupervised video alignment

  17. Apr 17, 2019

    Self-supervised methods are quite useful in the few-shot setting. Consider the action phase classification task. With only 1 labeled video, TCC achieves similar performance to vanilla supervised learning models trained with ~50 videos.

  18. Apr 17, 2019

    TCC discovers the phases of an action without additional labels. In this video, we retrieve nearest neighbors in the embedding space to frames in the reference video. In spite of many variations, TCC maps semantically similar frames to nearby points in the embedding space.

  19. Apr 17, 2019

    ML highlights from the paper: 1. Cycle-consistency loss applied directly on low-dimensional embeddings (without a GAN / decoder). 2. Soft nearest neighbors to find correspondences across videos. Training method:

  20. Apr 17, 2019

    For a frame in video 1, TCC finds the nearest neighbor (NN) in video 2. To go back to video 1, we find the nearest neighbor of NN in video 1. If we came back to the frame we started from, the frames are cycle-consistent. TCC minimizes this cycle-consistency error.

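The forward/backward check described in the last two tweets can be sketched in a few lines. This is a toy illustration with 2-D per-frame embeddings, not the authors' implementation; the helper names `soft_nn` and `cycle_error` are hypothetical, and the soft nearest neighbor is the softmax-weighted average of candidate embeddings mentioned in the thread:

```python
# Toy sketch of temporal cycle-consistency (TCC) with soft nearest
# neighbors. Embeddings are plain Python lists of floats; in the real
# method they would come from a learned per-frame encoder.
import math

def soft_nn(query, candidates):
    """Soft (softmax-weighted) nearest neighbor of `query` among `candidates`."""
    dists = [sum((q - c) ** 2 for q, c in zip(query, cand)) for cand in candidates]
    weights = [math.exp(-d) for d in dists]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted average of the candidate embeddings.
    return [sum(w * cand[k] for w, cand in zip(weights, candidates))
            for k in range(len(query))]

def cycle_error(u, v, i):
    """Cycle frame i of video 1 (embeddings u) through video 2 (embeddings v):
    forward to its soft NN in v, then back to a soft NN in u. The
    cycle-consistency error is the squared distance to the start frame."""
    nn_v = soft_nn(u[i], v)   # forward: soft nearest frame in video 2
    back = soft_nn(nn_v, u)   # backward: soft nearest frame in video 1
    return sum((b - s) ** 2 for b, s in zip(back, u[i]))
```

When the two videos embed to matching points, the cycle returns to its start and the error is near zero; training would minimize this error (in the paper, via a differentiable loss on the soft alignment) so that semantically similar frames land close together.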
