Andrei Bursuc

@abursuc

Research scientist working on computer vision and machine learning; Inria alumnus; PhD from Mines ParisTech. Opinions are my own.

Paris, France
Joined November 2008

Tweets


  1. Pinned tweet
    Jan 27

    TRADI: Tracking deep neural network weight distributions -- work with G. Franchi. We’re proposing a cheap method for getting ensembles of networks from a single network training. 1/

  2. Retweeted
    Aug 11, 2019

    An awesome matplotlib cheatsheet from Nicolas Rougier on the matplotlib mailing list.

  3. Feb 3

    A really fun summer school mixing talks from an all-star line-up followed by daily sports activities.

  4. Retweeted
    Feb 1

    This bears repeating: PhDs who end up working outside the academy (for whatever reason) are not failures. The academy has no monopoly on great and important work. There are great successes to be had in other sectors. Enough already with the professional elitism.

  5. Jan 30

    I can imagine that the job of ACs is really challenging, and making sure all papers get high-quality reviews is not easy. Although I'm sure most ACs are highly qualified scientists, having the decision ultimately taken by a single person feels weird for peer review.

  6. Jan 30

    is quite a particular venue in terms of reviewing experience: submissions with all accept ratings (scores 6/8) can get rejected, while others with all rejects can get accepted 🙃

  7. Retweeted
    Jan 29

    Virtual KITTI 2 is out! New release of the synthetic dataset based on proxy virtual worlds: object tracking, matching, estimation.

  8. Jan 27

    We evaluate our method on standard classification and regression benchmarks, and on out-of-distribution detection for classification and semantic segmentation, achieving competitive results. 4/

  9. Jan 27

    Here, we track the trajectory of the weights during optimization using Kalman filters, allowing us to compute distributions of the weights. We can then sample an ensemble of networks for estimating model uncertainty. 3/

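The filtering idea in this tweet can be sketched roughly as follows. This is a toy illustration only, not the TRADI implementation: the quadratic loss, the noise levels, and the independent per-weight scalar Kalman update are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_trajectory(w0, grad_fn, lr=0.1, steps=100):
    """Plain SGD, yielding the weight vector at every step."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
        yield w.copy()

def track_gaussian(trajectory, process_noise=1e-4, obs_noise=1e-2):
    """Kalman-style filter per weight: treat the current weights as
    noisy observations of a latent mean, and track (mean, variance)."""
    mean = var = None
    for w in trajectory:
        if mean is None:
            mean, var = w.copy(), np.full_like(w, obs_noise)
            continue
        var = var + process_noise           # predict step
        gain = var / (var + obs_noise)      # Kalman gain
        mean = mean + gain * (w - mean)     # update step
        var = (1.0 - gain) * var
    return mean, var

# Toy problem: L(w) = 0.5 * ||w - target||^2 with noisy gradients
target = np.array([1.0, -2.0])
grad_fn = lambda w: (w - target) + 0.05 * rng.normal(size=w.shape)

mean, var = track_gaussian(sgd_trajectory(np.zeros(2), grad_fn))

# Sample an "ensemble" of weight vectors from the tracked distribution,
# which could then be used for uncertainty estimation.
ensemble = mean + np.sqrt(var) * rng.normal(size=(10, 2))
print(ensemble.shape)  # (10, 2)
```

The tracked mean lands near the SGD optimum, and the variance gives a cheap per-weight spread from which multiple networks can be drawn without retraining.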
  10. Jan 27

    Weights of a DNN are optimized from a random init towards an optimum value minimizing the loss. Only this final state of the weights is typically kept for testing, while the wealth of information on the geometry of the weight space, accumulated over the descent, is discarded. 2/

  11. Retweeted
    Jan 22

    Excited to share PCGrad, a super simple & effective method for multi-task learning & multi-task RL: project conflicting gradients. On Meta-World MT50, PCGrad can solve *2x* more tasks than prior methods. w/ Tianhe Yu, S Kumar, Gupta

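The "project conflicting gradients" idea mentioned in the tweet can be sketched in a few lines. This is a simplified reimplementation for illustration, not the authors' released code (in particular, the deterministic pairwise order here is an assumption; the published method randomizes it):

```python
import numpy as np

def pcgrad(grads):
    """Project-conflicting-gradients sketch.

    grads: list of per-task gradient vectors (np.ndarray).
    Whenever a task's gradient conflicts with another task's gradient
    (negative dot product), it is projected onto the normal plane of
    that gradient. Returns the sum of the projected gradients.
    """
    projected = [g.copy() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = g_i @ g_j
            if dot < 0:  # conflicting directions
                g_i -= dot / (g_j @ g_j) * g_j  # remove the conflicting component
    return sum(projected)

# Two conflicting task gradients (their dot product is negative)
g1 = np.array([1.0, 0.0])
g2 = np.array([-1.0, 1.0])
update = pcgrad([g1, g2])
print(update)  # [0.5 1.5]
```

After projection each task's gradient no longer opposes the other, so a single combined update can make progress on both tasks instead of letting them cancel out.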
  12. Jan 21

    The recent demo from Mobileye of a 20-min continuous session on the streets of Jerusalem is quite impressive

  13. Jan 17

    reviews: witnessing the commoditization of ImageNet experiments in submitted papers. The amount of experiments that some authors manage to squeeze into their papers is dazzling.

  14. Jan 14

    Join these fantastic lecturers in one of the most scenic places in the French Alps for 1 month. I wish I were a graduate student again :)

  15. Retweeted
    Jan 10

    Domain adaptation by enabling camera and lidar-based models to learn from each other. Work by et al.

  16. Jan 7

    This new augmented visualization system for fencing games is mesmerizing. Great work by and collaborators

  17. Jan 6

    Oh yeah! (back from holidays with a 5yo and 1yo)

  18. Retweeted
    Jan 4

    Wow, there is a huge revolution happening right now. More and more competitions are "kernel-only submission", which means you upload your model to the cloud and all inference is done in a submission kernel with a runtime limit. Farewell, 100500-model ensembles!

  19. Dec 26, 2019

    Don't miss this excellent defence by of the Bayesian approach for DNNs

  20. Dec 19, 2019

    We've released code, pre-trained models and configs for all experiments in our boosting few-shot learning with self-supervision work:

