Jakob Macke

@jakhmack

How do neural networks compute? Developing AI/ML tools for science and engineering; Prof. for Computational Neuroengineering ; I have my own views.

Munich, Bavaria
Joined April 2015

Tweets


  1. Pinned Tweet
    Nov 12, 2019

    New: "Training deep neural density estimators to identify mechanistic models of neural dynamics” Nonnenmacher Bassetto . Our biggest project so far! Thread:

    Show this thread
  2. Jan 17

    This looks super cool but will have to dig in! This looks like a great approach to unifying different models of decision making, and thereby embedding them in a unified framework for parameter inference.

  3. Jan 17

    Congrats -- impressive to have a first-author computational neuroscience paper from a project started as undergrad and finished early in Masters!

  4. Retweeted
    Jan 17

    Proud and humbled to have published my very first project in comp. neuroscience. My supervisor Christian asked me, can we derive firing rate learning rules from STDP? Here you go Christian:

    Show this thread
  5. Retweeted
    Jan 15

    The MLSS is the best route for young researchers to find facts and friends in the ML community. This year's is back at its spiritual home in Tuebingen.

  6. Jan 15

    Fantastic topic, fantastic directors, fantastic faculty, fantastic location-- highly recommended!!!

  7. Retweeted
    Jan 15
  8. Jan 14

    ... and more on neural density estimators for identifying mechanistic models: added a new tutorial page, illustrating inference on a two-parameter Hodgkin-Huxley model with SNPE step-by-step:

  9. Jan 14

    Exciting study by @teuerlab et al: They show how combining careful model design + prior constraints from literature + modern inference tools (our SNPE ) can yield detailed data-driven biophysical models for retinal neuroprosthetics!

  10. Retweeted
    Jan 10

    We have a new paper on , up now. I wanted to call it “Crushing the Hopfield limit”, but, sadly, I was overruled. Not much time today (flying to the , yay!), but in brief, WE CRUSH IT (the hopfield limit) (and in a cool cool way) (i’m excited) Ready?

    Show this thread
  11. Retweeted
    Dec 17, 2019

    This work is super similar to the likelihood-free inference methods from 's group. (See their STG analysis for comparison. Really, everyone's favorite!) and

    Show this thread
  12. Dec 14, 2019

    The at will be live-streamed today: Great Talks and Panel! My fantastic student Poornima Ramesh will be speaking at 15:00 (midnight German time...), on Adversarial Training of Neural Encoding Models.

  13. Dec 9, 2019

    2) ID in last layers is tightly correlated with test-performance 3) ID is much lower compared to 'linearized' representations with the same covariance-spectrum, suggesting that the representations lie on curved manifolds. Visit 's poster and discuss with him!

    Show this thread
  14. Dec 9, 2019

    We used an estimator of intrinsic dimensionality (ID) to empirically study how ID changes across layers and networks. 1) ID has a characteristic 'hunchback shape' across different networks.

    Show this thread
  15. Dec 9, 2019

    I'll not be at , but is presenting work that I helped with, "Intrinsic dimension of data representations in deep neural networks", Poster Tomorrow Tue 10:45-12:45, East Exhibition Hall B + C #169.

    Show this thread
  16. Retweeted

    We have a PhD opening in my lab , . Reach out if you are interested in active sensing behavior, naturalistic tasks, computational modeling, POMDPs and IRL. Great people to interact with, not only in my lab but also our collaborators! Please RT!

  17. Retweeted
    Dec 2, 2019

    Have you or someone you know done great work to promote diversity in neuroscience? Is there a diversity-promoting project or initiative that you’d love to see get recognized? Please send in nominations for this prize of €2000 + free trip to FENS 2020!

  18. Retweeted
    Dec 2, 2019

    New Prize from the ALBA and FENS-Kavli networks to highlight a scientist or group that has made outstanding contributions to promoting equality and diversity in brain sciences. More info: Deadline: March 1st. NOMINATE!

  19. Nov 19, 2019

    Quick P.S. on : Why is the full posterior over networks needed? Setting individual parameters is not enough. Each parameter might be 'in realistic range', but the overall network not. Known e.g. from , but sometimes ignored by brain simulations.

  20. Nov 17, 2019

    Dear toothbrush, If you were truly intelligent, would you not be able to find a more fun job than being a toothbrush?

  21. Retweeted
    Nov 13, 2019

    A must-read for all those working with models to study neuronal dynamics at the mechanistic level. Exciting approach for parameter tuning by et al.: Let DNNs do the work to find the right model parameters!

