Pasquale Minervini

@PMinervini

Researcher at University College London. Formerly,

London, England
Joined March 2012

Tweets


  1. Pinned Tweet
    Jan 13

    Our paper "Differentiable Reasoning on Large Knowledge Bases and Natural Language" is online! -- Together with an open-source neuro-symbolic reasoning framework, in TF: , w/

  2. Retweeted

    Relational processing in the semantic domain is impaired in medial temporal lobe amnesia from the brilliant Margaret Keane, Mieke Verfaellie et al.

  3. Retweeted
    17 hours ago

    If your new algorithm for your ICML submission is too slow, this blog post is made for you:

  4. Retweeted
    19 hours ago

    François () is both an independent ML researcher investigating sparse efficient models and... an early angel investor in 🤗 Happy that he agreed to share some of his knowledge and experience on sparse models in a series of posts! First one is here

  5. The Missing Semester of Your CS Education: -- a collection of useful guides on all the tools (e.g. the shell, Vim, Git) that someone with a non-CS background may need in order to become much more productive

  6. Retweeted
    Feb 3

104: and talk to us about model distillation: approximating a large model's decision boundary with a smaller model. After talking about the general area, we dive into DistilBERT.

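The soft-label objective that distillation is built on can be sketched in a few lines of numpy. This is a generic illustration, not code from the episode or from DistilBERT: the function names and the temperature value are made up, and a real setup would also mix in a hard-label loss term.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T gives softer distributions,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the student is trained to match the teacher's soft predictions.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, -2.0]
print(distillation_loss([4.0, 1.0, -2.0], teacher))  # 0.0 (student matches teacher)
print(distillation_loss([0.0, 3.0, 0.0], teacher))   # positive: distributions disagree
```

Minimizing this KL term pulls the smaller model's decision boundary toward the larger model's, which is exactly the approximation the episode describes.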
  7. Retweeted
    Feb 1

Mixtures are *A M A Z I N G*! They let you: 1) marginalize in O(k) (k = # components) if the components can marginalize (e.g. Gaussians); 2) approximate *any* density well enough for k → +∞, i.e., asymptotically... but in practice? How many Gaussians to fit the densities in the pic?

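Point 1) above can be made concrete with a small numpy sketch (the mixture parameters are made up for illustration): marginalizing a diagonal-covariance Gaussian mixture over one dimension is a k-term sum, because each component marginalizes to a 1-D Gaussian in closed form.

```python
import numpy as np

# Hypothetical k=3 component 2-D Gaussian mixture with diagonal covariances.
weights = np.array([0.5, 0.3, 0.2])
means   = np.array([[0.0, 0.0], [2.0, 1.0], [-1.0, 3.0]])
stds    = np.array([[1.0, 0.5], [0.5, 1.0], [0.8, 0.7]])

def normal_pdf(x, mu, sigma):
    # Density of a 1-D Gaussian N(mu, sigma^2).
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def marginal_x(x):
    # Marginalizing out the second dimension costs O(k): each diagonal
    # Gaussian component marginalizes to a 1-D Gaussian, and the mixture
    # marginal is the weighted sum of those k component marginals.
    return sum(w * normal_pdf(x, m[0], s[0])
               for w, m, s in zip(weights, means, stds))

# Sanity check: the marginal is a valid density (Riemann sum ≈ 1).
xs = np.linspace(-10.0, 10.0, 2001)
print(marginal_x(xs).sum() * (xs[1] - xs[0]))  # ≈ 1.0
```

The same k-term structure gives cheap conditionals and moments as well, which is what makes mixtures so convenient as tractable density estimators.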
  8. Retweeted
    Jan 29

    CLARIFY Kick-off meeting!!! Today the Consortium members meet 📍at Hospital Puerta de Hierro to start up the project and organize the work for the next months. 💪💪CLARIFY Team!

  9. Retweeted
    Feb 2

    Interesting work! Reminds me also of and the . A shameless advertisement of our own work on the "Automatic Bayesian Density Analysis", , with , , Zoubin Ghahramani, and

  10. Retweeted
    Feb 1

"NNs work well, so we use them. This is an understandable reflex. In the enthusiasm with which the scientific community has embraced them, building an understanding of how they work is trailing behind" What a beautiful intro

  11. Retweeted
    Jan 31
  12. Retweeted
    Jan 31
  13. Retweeted
    Jan 30

    Anybody have a good source on how bird cognition works? (They have virtually no cortex, but corvids and parrots can rival primates in the complexity of tasks they learn.) Do birds have some equivalent of the hierarchical/filter-bank image processing structures mammals do?

  14. Retweeted
    Jan 31

    Given the paucity of annotated data, how can we perform sample-efficient generalization on unseen task-language combinations? Possible solution: a generative model of the neural parameter space, factorized into variables for several languages and tasks. 1/2

  15. Retweeted
    Jan 30

I think our "Neural Arithmetic Units" ICLR paper provides a nice list of sanity checks when developing a new unit. Looking at initialization, gradients, loss space, and redundant parameters is generally important. I hope to see more of this :) -

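The gradients item on that checklist is commonly verified with central finite differences. Here is a generic sketch on a toy linear unit with a squared loss; the unit, loss, and names are illustrative, not taken from the paper:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    # Central finite differences: a standard sanity check that an
    # analytic gradient matches the local shape of the loss surface.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
    return g

# Toy "unit": y = w·x, with a squared loss against a fixed target.
x, target = np.array([3.0, -2.0]), 5.0
loss = lambda w: 0.5 * (w @ x - target) ** 2
analytic_grad = lambda w: (w @ x - target) * x

w = np.array([0.5, 1.5])
print(np.max(np.abs(numerical_grad(loss, w) - analytic_grad(w))))  # ≈ 0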
  16. Retweeted
    Jan 29

    New blog post: Contrastive Self-Supervised Learning. Contrastive methods learn representations by encoding what makes two things similar or different. I find them very promising and go over some recent works such as DIM, CPC, AMDIM, CMC, MoCo etc.

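The "encode what makes two things similar or different" idea can be sketched as an InfoNCE-style loss in plain numpy. This is a generic illustration rather than the exact loss of any of the works named above; the function name, batch shapes, and temperature are assumptions:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # InfoNCE-style contrastive loss: each anchor should score its own
    # positive view higher than every other sample in the batch.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature               # pairwise cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching pair (the diagonal) as the target.
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # two views of the same data
mismatched = info_nce(z, rng.normal(size=z.shape))          # unrelated pairs
print(aligned, mismatched)  # aligned views incur a much lower loss
```

Methods like CPC, MoCo, and SimCLR differ mainly in how the positive views and the pool of negatives are constructed around a loss of this shape.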
  17. Retweeted
    Jan 29

    At we run a "Data and Knowledge Engineering" seminar every Monday. Check out the amazing list of speakers for the Spring semester, with a great mix of internal and external speakers from both industry and academia!

  18. Retweeted
    Aug 21, 2019

paper out! "most animal behavior is not the result of clever learning algorithms but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which must be compressed through a “genomic bottleneck”."

  19. Jan 28
  20. Retweeted
    Jan 21

We are organizing a Workshop on Geometric and Relational Deep Learning! Registration invites will be shared soon. Interested in participating? Consider submitting an abstract or getting in touch: w/

  21. Retweeted
    Jan 27

    Fully-funded PhD scholarships at : Application deadline: February 29. Start date: October. Contact me if interested in starting a PhD in or a related topic!

