Sylvain

@sylvain_gelly

ML Research @ Google Brain Zurich

Joined May 2009

Tweets


  1. Jan 6

    Will 2020 see more work on pretrained representations and transfer learning? This simple (but large-scale) BiT approach is quite effective on a wide range of datasets. You can see here *all* top-1 errors on the CIFAR-10 test set. The VTAB testbed is improved but still challenging. (A generic transfer-learning sketch in this spirit appears after the timeline.)

  2. Retweeted

    An update of our paper investigating object compositionality in GANs is now available: We show how a structured generator that learns about objects can facilitate unsupervised instance segmentation. w/ 1/4

  3. Retweeted
    Nov 27, 2019

    Using our Semantic Bottleneck GAN (SB-GAN), we achieve SOTA results in synthesizing complex scenes from scratch as well as from real semantic layouts. Joint work with Eric Tzeng and Trevor Darrell.

  4. Retweeted
    Nov 27, 2019

    In our recent collaboration with we show how to generate realistic complex scenes from scratch! While the problem is extremely challenging, we show how to achieve SOTA in unconditional generation and improve conditional generation using SPADE

  5. Retweeted
    Nov 26, 2019

    We are happy to announce the v2.0 release of the Google Research Football Environment. The most exciting feature of this release is the Game Server, which lets your agent compete online with other researchers' models. Visit and give it a try!

  6. Retweeted
    Nov 6, 2019

    We’re pleased to release the Visual Task Adaptation Benchmark (VTAB), a diverse, realistic, and challenging protocol to measure progress towards universal visual representations. Learn all about it below.

  7. Oct 17, 2019

    Are you interested in Representation Learning, Transfer Learning, Domain Adaptation, Self-Supervised Learning or Semi-Supervised Learning? Have a look at this work from Google Brain Zurich!

  8. Retweeted
    Aug 1, 2019

    Excited about recent progress in self-supervised representation learning based on mutual information maximisation? Mutual information might not be the key ingredient for the success of these methods, as shown in our latest paper: (An InfoNCE-style bound is sketched after the timeline.)

  9. Retweeted
    Jun 24, 2019

    Code released to adapt BERT using few parameters. Can be used to adapt one model to many tasks. Catastrophic forgetting not included. (An adapter-module sketch appears after the timeline.)

  10. Retweeted
    Jun 11, 2019

    Congratulations to the Google and authors of "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations" (), recipient of a Best Paper Award! Learn more in the blog post at .

  11. Retweeted
    Jun 7, 2019

    Check out a novel environment where agents aim to master the world’s most popular sport—football! The Google Research Football Environment includes benchmarks & progressive RL training scenarios, and is available in open source beta→ (A minimal usage sketch appears after the timeline.)

  12. Retweeted
    May 28, 2019

    Quantitative evaluation of generative models is a key research challenge. In our latest work, we introduce a theoretical framework which recovers the existing approaches based on precision and recall as special cases, and offers novel geometric insights. (A k-NN precision/recall sketch appears after the timeline.)

  13. Retweeted
    May 10, 2019

    Want to turn your self-supervised method into a semi-supervised learning technique? Check out our S⁴L framework ()! Work done at with , and . (A combined-loss sketch appears after the timeline.)

  14. Retweeted
    Apr 29, 2019

    Our work on "High-Fidelity Image Generation With Fewer Labels" has been accepted to ICML'19! Thanks to the reviewers and the area chairs for the thorough reviews!

  15. Retweeted
    Apr 24, 2019

    Learning disentangled representations of a scene is critical for many machine vision tasks. In collaboration with and , we present a broad examination of the field, examine the role of implicit biases and provide direction for future research.

  16. Retweeted

    Check out some research exploring a new approach to training conditional generative adversarial networks (GANs) that reduces the amount of labeled data required by a factor of ~10 (along with an update to the Compare GAN library!). Learn more at ↓

  17. Retweeted
    Mar 11, 2019

    A year ago successfully training GANs on ImageNet without labels seemed out of reach. Now, we can obtain samples such as the ones below. It's amazing what increased compute + new insights/techniques can achieve.

  18. Retweeted
    Mar 7, 2019

    Self-supervision + clustering + BigGAN = FID 22.0 for image synthesis on ImageNet without labels. Check out all the samples in our full paper at . (A clustering-to-pseudo-labels sketch appears after the timeline.)

  19. Retweeted
    Mar 7, 2019

    How to train SOTA high-fidelity conditional GANs using 10x fewer labels? Using self-supervision and semi-supervision! Check out our latest work at

  20. Retweeted
    Mar 4, 2019

    In collaboration with and , we have open sourced the code for the ICLR'19 paper "Episodic Curiosity through Reachability". Check out the paper (including new locomotion experiments!) at and the code at . (A curiosity-bonus sketch appears after the timeline.)

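Several of the tweets above point at concrete techniques; the sketches below make them concrete. For the BiT tweet in item 1: BiT itself is about large-scale pretraining, but the downstream step is standard fine-tuning of a pretrained backbone. A minimal sketch with hypothetical stand-ins (torchvision's ResNet-50 for a BiT backbone, CIFAR-10 for the target task), not the BiT release itself:

```python
# NOTE: hypothetical stand-ins -- torchvision's ResNet-50 for a BiT backbone,
# CIFAR-10 for the downstream task. Not the BiT code.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained backbone and swap in a fresh head for the downstream classes.
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 10)  # CIFAR-10 has 10 classes
model = model.to(device)

# Upscale CIFAR-10 images to the resolution the backbone was pretrained on.
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
train_set = datasets.CIFAR10(root="data", train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# Fine-tune the whole network with a small learning rate.
optimizer = torch.optim.SGD(model.parameters(), lr=3e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```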
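For item 8: the representation-learning methods discussed there typically maximise an InfoNCE-style lower bound on mutual information. A minimal sketch of that bound, with hypothetical paired embeddings z_x and z_y; it assumes nothing about the paper's own analysis or estimator:

```python
# NOTE: illustrative only; z_x and z_y are hypothetical embeddings of two views
# of the same batch of samples.
import math
import torch
import torch.nn.functional as F

def info_nce_bound(z_x, z_y, temperature=0.1):
    """Return the InfoNCE lower bound on I(X; Y) for a batch of paired embeddings."""
    z_x = F.normalize(z_x, dim=-1)
    z_y = F.normalize(z_y, dim=-1)
    logits = z_x @ z_y.t() / temperature                      # all-pairs similarities
    labels = torch.arange(z_x.size(0), device=logits.device)  # positives on the diagonal
    # log(N) minus the cost of identifying the true pair is the standard bound.
    return math.log(z_x.size(0)) - F.cross_entropy(logits, labels)
```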
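For item 9: the "few parameters" idea inserts small bottleneck adapters into a frozen pretrained model and trains only those, so one model can serve many tasks. A rough sketch of such a module; the dimensions and placement are illustrative, not the released BERT adapter code:

```python
# NOTE: sketch of a bottleneck adapter with a residual connection; hypothetical sizes.
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down to a small space
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.GELU()

    def forward(self, x):
        # The residual connection keeps the module near identity at initialisation;
        # per task, only these few parameters are trained while the backbone stays frozen.
        return x + self.up(self.act(self.down(x)))
```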
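For item 11: a minimal interaction loop with the Google Research Football environment, assuming the open-source gfootball package and its create_environment helper; the scenario and representation names below are taken to be among the released options, and the random agent is only a placeholder:

```python
# NOTE: assumes the gfootball package; scenario/representation names are assumptions.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",  # a small progressive training scenario
    representation="simple115",           # compact float-vector observation
    render=False,
)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()    # random placeholder agent
    obs, reward, done, info = env.step(action)
```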
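For item 12: the paper offers a unifying framework; the sketch below is not that framework, just the common k-NN estimate of precision and recall for generative models in feature space, to make the underlying quantities concrete:

```python
# NOTE: not the paper's method -- a plain k-NN estimate of precision (generated
# samples inside the real-data manifold) and recall (real samples covered by the
# generated manifold), on small stand-in features.
import numpy as np

def knn_radius(feats, k=3):
    # Distance from each point to its k-th nearest neighbour (index 0 is the point itself).
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]

def coverage(queries, refs, radii):
    # Fraction of query points that fall inside the k-NN ball of some reference point.
    d = np.linalg.norm(queries[:, None, :] - refs[None, :, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

real = np.random.randn(200, 16)   # stand-in feature embeddings of real images
fake = np.random.randn(200, 16)   # stand-in feature embeddings of generated images
precision = coverage(fake, real, knn_radius(real))
recall = coverage(real, fake, knn_radius(fake))
```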
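For item 13: the gist of turning a self-supervised method into a semi-supervised one is to optimise a supervised loss on the labelled data jointly with a self-supervised loss (for example rotation prediction) on the unlabelled data. A hypothetical sketch with the backbone and heads left abstract:

```python
# NOTE: backbone, cls_head and rot_head are hypothetical modules; only the joint
# objective is sketched, in the spirit of a rotation-based variant.
import torch
import torch.nn.functional as F

def s4l_style_loss(backbone, cls_head, rot_head, x_lab, y_lab, x_unlab, w=1.0):
    # Supervised branch: ordinary classification on the labelled batch.
    sup_loss = F.cross_entropy(cls_head(backbone(x_lab)), y_lab)

    # Self-supervised branch: rotate each unlabelled image by 0/90/180/270 degrees
    # and predict which rotation was applied.
    rotations = torch.cat([torch.rot90(x_unlab, k, dims=(2, 3)) for k in range(4)])
    rot_labels = torch.arange(4, device=x_unlab.device).repeat_interleave(x_unlab.size(0))
    rot_loss = F.cross_entropy(rot_head(backbone(rotations)), rot_labels)

    return sup_loss + w * rot_loss
```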
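For item 18: one way to read "self-supervision + clustering" is to cluster self-supervised features of the training images and use the cluster ids as pseudo-labels for a class-conditional GAN. A hypothetical sketch of only the labelling step; the feature extractor and the GAN are placeholders:

```python
# NOTE: the feature extractor and the conditional GAN are placeholders; this shows
# only the "cluster features into pseudo-labels" step.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels(features, num_clusters=50):
    """features: [num_images, dim] self-supervised embeddings of the training images."""
    kmeans = KMeans(n_clusters=num_clusters, n_init=10, random_state=0)
    return kmeans.fit_predict(features)   # one pseudo-label per image

# The pseudo-labels then stand in for ground-truth classes when training a
# class-conditional generator/discriminator, e.g. G(z, pseudo_label).
labels = pseudo_labels(np.random.randn(1000, 128))   # stand-in features
```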
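For item 20: episodic curiosity grants an exploration bonus when the current observation is judged hard to reach from an episodic memory of past observations, using a learned reachability comparator. A hypothetical sketch with the comparator abstracted as a callable and the constants simplified:

```python
# NOTE: the comparator (a learned reachability network) is abstracted as a callable
# returning a score in [0, 1]; memory handling and constants are simplified.
def curiosity_bonus(embedding, memory, comparator, alpha=1.0, beta=0.5, threshold=0.5):
    """Reward bonus for one step; memory is the per-episode list of stored embeddings."""
    if not memory:
        memory.append(embedding)
        return alpha * beta   # the first observation of an episode is novel by definition

    # Highest reachability against anything in memory: ~1 means "a few steps away".
    similarity = max(float(comparator(embedding, m)) for m in memory)
    bonus = alpha * (beta - similarity)

    # Only observations judged sufficiently hard to reach are added to memory.
    if similarity < threshold:
        memory.append(embedding)
    return bonus
```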
