Danilo Bzdok

@danilobzdok

Human-defining cognition, brain health & machine learning. MIA: . TEDx: . OHBM: .

McGill & Mila, Montreal
Joined February 2014

Tweets


  1. Pinned Tweet
    Apr 3, 2018
  2. Retweeted
    Feb 2

    Wow: Google's "Meena" chatbot was trained on a full TPUv3 pod (2048 TPU cores) for **30 full days** - That's more than $1,400,000 of compute time to train this chatbot model. (! 100+ petaflops of sustained compute !)

  3. Retweeted
    Jan 29

    We developed a new participation coefficient (PC) that explicitly accounts for the influence of module size on PC. Just out in Work led by , together with , and G. Jackson. Well done Mangor!

  4. Retweeted
    Feb 1

    congrats to , thanks to , out now: The physiological effects of noninvasive brain stimulation fundamentally differ across the human cortex supported by

  5. Retweeted
    Feb 2
  6. Retweeted
    Jan 31

    Artificial intelligence could successfully identify gender identity subtypes from brain imaging data.

  7. Retweeted
    Jan 31
    Replying to

Does anyone in 2020 believe that linear regression can be used for causal discovery?

  8. Retweeted
    Jan 31
    Replying to

I am not so sure. Why? Because all my attempts to convince to show us how he solves a simple causal problem (one whose solution is known in advance) ended in failure. I lost my charm. Simpson's paradox would be a good example.

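The Simpson's-paradox objection in the thread above can be made concrete with synthetic data: within each subgroup a plain linear regression finds a positive slope, while the pooled regression finds a negative one, so the fitted coefficient alone cannot settle the causal question. A minimal sketch (the data and group structure are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups: within each, y rises with x (slope ~ +1), but the group
# with larger x values has a much lower intercept, so the pooled
# regression line slopes downward.
x1 = rng.uniform(0, 1, 200)
y1 = 1.0 * x1 + 3.0 + rng.normal(0, 0.05, 200)
x2 = rng.uniform(2, 3, 200)
y2 = 1.0 * x2 - 2.0 + rng.normal(0, 0.05, 200)

slope_g1 = np.polyfit(x1, y1, 1)[0]   # within-group slope, ~ +1
slope_g2 = np.polyfit(x2, y2, 1)[0]   # within-group slope, ~ +1
slope_all = np.polyfit(np.concatenate([x1, x2]),
                       np.concatenate([y1, y2]), 1)[0]  # pooled: negative

print(slope_g1, slope_g2, slope_all)
```

The same regression routine gives opposite signs depending on conditioning, which is exactly why the fitted slope by itself is not a causal quantity.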
  9. Jan 31

Hot take: today there is little consensus on how to do causal discovery: some emphasize a "qualitative" graphical-model-based approach, some emphasize latent factor analysis for 'surrogate causes', and some use linear regression for "quantitative" causal discovery 👉 other results!

  10. Retweeted
    Jan 30

    Pandas 1.0 is here! * Read the release notes: * Read the blogpost reflecting on what 1.0 means to our project: * Install with conda / PyPI: Thanks to our 300+ contributors to this release.

  11. Retweeted
    Jan 30

    Out now in : fantastic work by Jeremy Moreau in the lab, in collab. with neurosurgeon extraordinaire Roy Dudley : from 60,000+ cases, to the prediction of outcome, and a mobile app. More on that soon.

  12. Jan 28

du jour: approximating intractable posterior distributions can be viewed as operating in 3 consecutive phases: 1) after a random start, get close to the typical set; 2) roughly explore the probability mass of the typical set; 3) converge to stationarity

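The three phases above read like the standard picture of Markov chain Monte Carlo. A minimal random-walk Metropolis sketch, assuming a 1-D standard-normal target posterior (the target, start value, step size, and burn-in length are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Unnormalized log-density of the "posterior": standard normal.
    return -0.5 * x**2

# Phase 1: random start far from the typical set.
x = 20.0
samples = []
for step in range(5000):
    proposal = x + rng.normal(0, 1.0)  # random-walk proposal
    # Metropolis accept/reject on the log scale.
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

# Phases 2/3: after burn-in the chain explores the typical set, and
# chain averages converge toward the stationary moments (mean 0, sd 1).
burned = np.array(samples[1000:])
print(burned.mean(), burned.std())
```

The early samples show the transient drift from 20 toward the typical set; only the post-burn-in portion is used for estimates.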
  13. Retweeted
    Jan 28

Our article on gene expression trajectories and neuropathology in neurodegeneration is finally out, in journal. New machine-learning method to analyze long-term molecular mechanisms in disease, and perform individual disease staging

  14. Retweeted
    Jan 26
  15. Jan 27

    Tomas Paus at lifespan research symposium in Copenhagen today: “Most people in dMRI study the major white-matter bundles. But these only make up 5-8% of the entire axons in the brain, most of which do not leave the cortex.”

  16. Jan 27

Tomas Paus at lifespan research symposium in Copenhagen today: “In population neuroscience, I put my money on structural brain scans” 1) assesses the whole brain 2) resting state is a big black box 3) high retest reliability 4) we target traits, not states

  17. Retweeted
    Jan 26

I had missed this when it came out 2 months ago. Severe allegations raised against the book “Why We Sleep”, including data manipulation. Don't know enough about the topic to vet the claims, but it appears quite thorough. Thoughts on this?

  18. Jan 26

Andrew Ng's interview with Geoffrey Hinton makes evident how much grit and perseverance it takes to generate and defend truly novel research ideas. h/t:

  19. Jan 25

metathought: Frequentists ‘fit’ model parameters (like the centroids in k-means) by estimating one best value for them. Bayesians instead ‘infer a posterior’ over model parameters (as in latent Dirichlet allocation), rather than fitting them.

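The contrast can be made concrete with a coin-bias example (the data counts and the Beta(1, 1) prior are illustrative assumptions): the frequentist route returns one best value, the maximum-likelihood estimate, while the Bayesian route returns a whole posterior distribution over the parameter.

```python
import numpy as np

heads, tails = 7, 3

# Frequentist: fit one best value (maximum-likelihood estimate).
theta_mle = heads / (heads + tails)   # 0.7

# Bayesian: infer a posterior over theta. With a Beta(1, 1) prior the
# posterior is Beta(1 + heads, 1 + tails) in closed form (conjugacy).
a, b = 1 + heads, 1 + tails
posterior_mean = a / (a + b)          # 8/12, slightly shrunk toward 0.5
posterior_draws = np.random.default_rng(0).beta(a, b, 100_000)

print(theta_mle, posterior_mean, posterior_draws.std())
```

The posterior's spread carries the uncertainty that the single fitted value discards, which is the substance of the distinction in the tweet.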
  20. Jan 24

    thought: Markov Decision Processes (MDPs) essentially come down to hidden Markov models augmented with latent parameter matrices for action policies and value appraisal

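One way to see the point above: an HMM carries a single state-transition matrix T[s, s'], while an MDP adds an action axis, T[s, a, s'], together with a reward matrix and a policy/value computation on top. A toy value-iteration sketch (the transition probabilities, rewards, and discount are invented for illustration):

```python
import numpy as np

# A 2-state, 2-action MDP: like an HMM transition matrix, but with an
# extra action axis plus rewards.
# T[s, a, s'] = P(next state s' | state s, action a)
T = np.array([[[0.9, 0.1],    # state 0, action 0
               [0.2, 0.8]],   # state 0, action 1
              [[0.5, 0.5],    # state 1, action 0
               [0.1, 0.9]]])  # state 1, action 1
R = np.array([[0.0, 1.0],     # R[s, a]: reward for action a in state s
              [2.0, 0.0]])
gamma = 0.9                   # discount factor

# Value iteration: V(s) = max_a [ R(s,a) + gamma * sum_s' T(s,a,s') V(s') ]
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * np.einsum('sap,p->sa', T, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)     # greedy action per state
print(V, policy)
```

Dropping the action axis (and the reward/value machinery) collapses T[s, a, s'] back to an HMM-style transition matrix, which is the reduction the tweet gestures at.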
  21. Jan 23

    “The sample size of highly cited experimental fMRI studies increased at a rate of 0.74 participant/year”

