Mario Lucic

@MarioLucic_

Senior Research Scientist at (Brain Team). PhD in CS from Zurich.

Joined December 2018

Tweets


  1. Pinned tweet
    11 Jun 2019

    I'm very happy to announce that we have received the Best Paper Award!

  2. Retweeted
    29 Jan

    Machine Learning Summer School 2020 is in Tuebingen, Germany! Please apply. Deadline: 11 Feb 2020.

  3. Retweeted
    6 Jan

    We distill key components for pre-training representations at scale: BigTransfer ("BiT") achieves SOTA on many benchmarks with ResNet, e.g. 87.8% top-1 on ImageNet (86.4% with only 25 images/class) and 99.3% on CIFAR-10 (97.6% with only 10 images/class).

  4. 20 Dec 2019

    Our work, which critically investigates the role of mutual information in self-supervised representation learning, was accepted to . w/ J. Djolonga

  5. Retweeted

    We have just launched the AutoTrain challenge at . Submit an optimizer achieving target test performance on a wide variety of (unknown) models/tasks without human tweaking 🧙🏻‍♂️ Get started here:

  6. Retweeted

    Refreshingly honest take on the hype cycle around the Neural ODE paper by one of the authors, presented this year at , which won a best paper award at last year. Video:

  7. 13 Dec 2019

    It was great collaborating with you Paul, happy to hear you had a great time!

  8. Retweeted

    An update of our paper investigating object compositionality in GANs is now available: We show how a structured generator that learns about objects can facilitate unsupervised instance segmentation. w/ 1/4

  9. 27 Nov 2019

    In our recent collaboration with we show how to generate realistic complex scenes from scratch! While the problem is extremely challenging, we show how to achieve SOTA in unconditional generation and improve conditional generation using SPADE.

  10. Retweeted
    27 Nov 2019

    We have open postdoctoral fellowship positions in the Foundations of Data Science program at .

  11. Retweeted
    26 Nov 2019

    We are happy to announce the v2.0 release of the Google Research Football Environment. The most exciting feature of this release is the Game Server, which lets your agent compete online with other researchers' models. Visit and give it a try!

  12. Retweeted
    24 Nov 2019

    Google AI Residency 2020 applications are open, with positions in many different locations including Europe and Africa. A fantastic program designed to jumpstart a career in Machine Learning. Apply at before Dec. 19, 2019.

  13. Retweeted
    21 Nov 2019

    We've looked into representation learning for with different datasets and fine-tuning using in-domain data. See paper with datasets and models included 🔋: with , and .

  14. Retweeted
    14 Nov 2019

    Already tackling some of the deep questions in AI ethics. This symposium is off to a great start!

  15. Retweeted
    6 Nov 2019

    We’re pleased to release the Visual Task Adaptation Benchmark (VTAB), a diverse, realistic, and challenging protocol to measure progress towards universal visual representations. Learn all about it below.

  16. Retweeted
    1 Nov 2019
  17. Retweeted
    29 Oct 2019

    Excited that our paper "Are Disentangled Representations Helpful for Abstract Visual Reasoning?" () is accepted to with code released at . With , & .

  18. Retweeted

    At ICCV and curious about semi-supervised learning with self-supervision? Come to our talk today at 15:20 in Hall D1 or chat with us at poster #20 from 15:30 to 18:00. , Code: . Joint work with and

  19. 17 Oct 2019

    How hard is it to learn a representation that transfers to ~20 downstream tasks in the small sample size regime? This paper evaluates the performance of generative (GANs, VAEs), self-supervised, as well as fully supervised models and their hybrids.

  20. Retweeted
    2 Oct 2019
  21. Retweeted
    1 Oct 2019

    I am looking for a postdoc in the general area of (RL), including RL, inverse RL, and imitation learning. Prior publications on the topic are needed. The start date is negotiable, but early next year is the target. Please retweet.

