Search results
  1. Jun 18, 2019

    Released at , MediaPipe is Google's new framework for media processing pipelines, combining model-based inference via TensorFlow with traditional CV tasks like optical flow, pose tracking, and more. Used in existing projects like Motion Stills.

  2. Ending my conference as a Crazy Rich Bayesian! Go Bayesians!

  3. Now available for commercial use on NGC, pix2pixHD developed by researchers generates high-resolution photorealistic images from high-level labels.

  4. May 10, 2019

    When video description meets bounding boxes! We are now releasing data/code/models/leaderboard (yes, everything) on our oral paper Grounded Video Description: Paper: Dataset (158k bboxes on 52k captions):

  5. Jun 19, 2019

    If you have any doubts about the applicability of CNNs to local feature detection for robust pose estimation, come to our Deep Charuco poster today at . This video shows a shadow effect; traditional methods are simply too frail. Red = fail

  6. Best paper award at ! Main idea: seeing around the corner to non-line-of-sight (NLOS) objects by using Fermat paths, a new theory of how NLOS photons follow specific geometric paths.

  7. Jun 20, 2019

    I liked the DeepMapping paper from Ding and Feng at . A bit similar to DIP, they use deep learning machinery to solve a surprising optimisation problem (no learning on a dataset): pose graph alignment for a set of pose scans.

  8. Jun 15, 2019

    Learn about Learning from Unlabeled Videos at , Sunday in Room E, 9:00am. Fresh posters and keynotes: Antonio Torralba, Noah Snavely, Andrew Zisserman, Bill Freeman, Abhinav Gupta, Kristen Grauman

  9. Jun 19, 2019

    If you're at , kindly come watch me present our work on "Cycle Consistency for Robust VQA" in the Grand Ballroom at 14:10. My co-authors will be there to answer all your questions in poster session #184, which follows. Dataset + Code + Paper:

  10. Jun 17, 2019

    If you enjoy 🍻 AND realtime visual-inertial SLAM, is hosting a happy hour tomorrow at 5pm. DM me for details!

  11. Jun 17, 2019

    This year Google is a proud Platinum Sponsor of , held in Long Beach, CA. If you’re attending, drop by the Google booth for some demos and chat with our researchers about their work on the field’s most interesting challenges! Learn more below!

  12. Jul 24, 2019

    We're launching a blog for sharing our latest work and reflections on research for safer and increasingly autonomous vehicles. The first post describes our recent paper ADVENT for unsupervised domain adaptation:
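ADVENT's name refers to adversarial entropy minimization: target-domain predictions are pushed toward confident, low-entropy outputs. As a hedged illustration of the entropy side of that idea (a minimal sketch, not the paper's code; the function name and the log-C normalization are assumptions):

```python
import numpy as np

def entropy_map(probs, eps=1e-12):
    """Per-pixel Shannon entropy of a softmax segmentation output.

    probs: (H, W, C) array of per-class probabilities summing to 1
    over the last axis. Returns an (H, W) map normalized by log(C),
    so 0 = fully confident and 1 = uniform. Entropy-minimization
    methods use such a map as a training signal on unlabeled
    target-domain images.
    """
    c = probs.shape[-1]
    return -np.sum(probs * np.log(probs + eps), axis=-1) / np.log(c)
```

A domain-adaptation loss would then penalize the mean of this map (directly, or adversarially via a discriminator on the entropy maps, as the ADVENT title suggests).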

  13. Jun 15, 2019

    NVIDIA Research will present 20 papers at , including 11 orals. Full list is here: . Check it out!

  14. Jun 19, 2019

    Y. Niitani of Preferred Networks gave an oral presentation at on "Sampling Techniques for Large-Scale Object Detection from Sparsely Annotated Objects", the method used to win 2nd prize at the Google AI Open Images challenge (w/ Akiba, Kerola, Ogawa, Sano & Suzuki)

  15. Jul 17, 2019

    Deep Flow-Guided Video Inpainting. Completing a missing flow field is easier than filling in the pixels of a missing region directly. SoTA on DAVIS and YouTube-VOS. Code / ArXiv
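The tweet's point is that once the flow field inside the hole has been completed, each missing pixel can simply be fetched from a neighboring frame by following the flow. A minimal sketch of that propagation step (nearest-neighbor lookup; names and interface are assumptions, not the paper's implementation):

```python
import numpy as np

def fill_from_next_frame(frame, frame_next, flow, mask):
    """Fill masked pixels by following completed forward flow.

    frame, frame_next: (H, W, C) arrays; flow: (H, W, 2) as (dx, dy)
    from `frame` to `frame_next`; mask: (H, W) bool, True where
    pixels are missing. Each missing pixel copies the pixel its flow
    vector points at in the next frame (rounded to the nearest pixel).
    """
    h, w = mask.shape
    out = frame.copy()
    ys, xs = np.nonzero(mask)
    xd = np.clip(np.rint(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    yd = np.clip(np.rint(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    out[ys, xs] = frame_next[yd, xd]
    return out
```

In practice this propagation runs bidirectionally across many frames, and any pixels still unreachable by flow fall back to single-image inpainting.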

  16. Jun 18, 2019

    At ? Visit the Google booth at 10:15 to learn about MediaPipe (), an open source framework for building machine learning pipelines, along with the many methods submitted to the recent Challenge on Learned Image Compression ()

  17. The Computer Vision team is seeking PhD students and recent graduates with a background in research and engineering. Create and build state-of-the-art AI in the areas of computer vision, NLP, and ML as a Software Engineer:

  18. For those interested in some of the latest self-driving research (especially those participating in our competition), we’ve collected our favorite self-driving content from in our latest reader's digest. Read them here:

  19. Jul 14, 2019

    "A General and Adaptive Robust Loss Function" by Jonathan T. Barron

  20. The [CVPR 2019 Flash Report] has been updated; it now runs to 230 pages! Please see here for the CVPR 2019 paper summaries
