Christian Gerloff _/˅¯\__/\_ ̷ ˅¯\_

@chg_ai

PhD candidate fascinated by computational statistics for neuroscience. Co-founder & CTO

Joined November 2017

Tweets


  1. Dec 20, 2019

    Interested in how neuroscience can inspire better AI? Come to NAISys, March 24-28. Abstracts (1 page) are due Jan 10; registration is open. {Please RETWEET ME}

  2. Dec 13, 2019

  3. Dec 6, 2019

    Check out our extensive review paper on normalizing flows! This paper is the product of years of thinking about flows: it contains everything we know about them, and many new insights. Written with co-authors. Thread 👇

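    A minimal sketch of the core idea behind normalizing flows, the change-of-variables formula, added here for context; the elementwise affine transform and its parameters are illustrative assumptions, not taken from the review.

        import numpy as np
        from scipy.stats import norm

        # Illustrative invertible, elementwise affine flow z = a * x + b (assumed parameters).
        a = np.array([0.5, 2.0])
        b = np.array([1.0, -1.0])

        def log_prob(x):
            """log p_X(x) via change of variables with a standard-normal base density."""
            z = a * x + b                      # forward pass x -> z
            log_base = norm.logpdf(z).sum()    # log p_Z(z) under the base distribution
            log_det = np.log(np.abs(a)).sum()  # log |det dz/dx| for a diagonal Jacobian
            return log_base + log_det

        print(log_prob(np.array([0.3, -0.7])))

    Richer flows replace the affine map with a stack of invertible neural transforms, but the log-density bookkeeping keeps exactly this shape.
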
  4. Dec 5, 2019

    Our paper on best practices for establishing evidence for prediction is now available without login

  5. Much of what’s being sold as "AI" today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and push back. Here are my annotated slides:

  6. Oct 29, 2019

    Brain networks, dimensionality, and global signal averaging in resting-state fMRI: Hierarchical network structure results in low-dimensional spatiotemporal dynamics

  7. Oct 3, 2019

    I wish my undergraduate students wanted to learn math and statistics instead of blasting carbon into the atmosphere to half-train a GAN before their free AWS credits run out.

  8. Oct 3, 2019

    Smoothing splines define a regularized least-squares problem whose solutions are sums of kernel functions (a special case of a reproducing kernel Hilbert space). Linear interpolation and cubic splines are special cases.

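    A hedged sketch of the point above: a smoothing spline solves a roughness-penalized least-squares fit, and the penalty weight interpolates between exact interpolation and heavy smoothing. The data and smoothing factor below are made up for illustration.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(0)
        x = np.linspace(0, 2 * np.pi, 60)
        y = np.sin(x) + rng.normal(scale=0.2, size=x.size)  # noisy observations

        # k=3 gives the classic cubic smoothing spline; s is the regularization budget:
        # s=0 interpolates the data, larger s trades fidelity for smoothness.
        spline = UnivariateSpline(x, y, k=3, s=len(x) * 0.04)
        y_hat = spline(x)
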
  9. Sep 27, 2019

  10. Sep 25, 2019

    Friendly reminder that if you want to use your figures again and don't want to hand copyright over to a journal: 1. Share the figure with a CC0 license before you submit, effectively releasing the figure into the public domain. 2. That's it, you can always use the figure now.

  11. Sep 25, 2019

    Our perspective on how machines can learn from brains is online. In short: use the model bias of brains to get a better inductive bias for machine learning. Written with co-authors.

  12. After 2.5 years, we finally shipped the book off for printing. The GitHub graphs do look a bit scary... You can download the PDF (and some Jupyter notebooks) from here:

  13. Sep 17, 2019

    We get a LOT of questions about how best to use Dask efficiently. We now curate lists of best practices: there is one page for the entire project, and one page for each of Arrays, DataFrames, and Delayed.

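    A small example in the spirit of those best practices (reasonable chunk sizes, lazy composition, one final compute); the array shape and chunking are arbitrary choices for illustration, not taken from the Dask pages.

        import dask.array as da

        # Build a lazy 10_000 x 10_000 array split into manageable chunks; nothing runs yet.
        x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

        # Compose operations lazily, then trigger a single computation at the end.
        result = (x + x.T).mean(axis=0)
        print(result.compute()[:5])
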
  14. I've made a cheat sheet and a bunch of applets to give you an intuitive feel for various reaction time distributions. You can choose datasets, fiddle with parameters, and see working code examples: 1/n

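    As one concrete example of a reaction time distribution (my choice here, not necessarily one from the cheat sheet): the ex-Gaussian, a Gaussian plus an exponential tail, which SciPy exposes as exponnorm. The parameter values are invented for the demo.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Ex-Gaussian RTs: Gaussian component (mu, sigma) plus an exponential tail (tau), in seconds.
        mu, sigma, tau = 0.40, 0.05, 0.15
        rts = rng.normal(mu, sigma, 1_000) + rng.exponential(tau, 1_000)

        # SciPy parameterizes exponnorm by K = tau / sigma, with loc = mu and scale = sigma.
        K, loc, scale = stats.exponnorm.fit(rts)
        print(f"mu≈{loc:.3f}, sigma≈{scale:.3f}, tau≈{K * scale:.3f}")
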
  15. Only going deep is perhaps not always the most profound approach! Great work by our team member. Congrats!

  16. One of the great pleasures of learning statistics is that you replace the nagging uncertainty about whether you are doing the right analysis with the nagging uncertainty about whether there is such a thing as the right analysis.

  17. Jul 21, 2019

    This is precisely why uncertainty quantification should be the rule and not the exception. Ever heard of probabilistic machine learning?

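    A small illustration of prediction with quantified uncertainty, using a Gaussian process regressor; the toy data and kernel choice are assumptions made for this sketch.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)
        X = rng.uniform(0, 5, size=(30, 1))
        y = np.sin(X).ravel() + rng.normal(scale=0.1, size=30)

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(X, y)

        # The predictive standard deviation says how much to trust each prediction;
        # it widens outside the range of the training data.
        X_new = np.linspace(0, 6, 50).reshape(-1, 1)
        mean, std = gp.predict(X_new, return_std=True)
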
  18. Jul 11, 2019

    Fixed slide deck: I'm removing the previous one, which had errors. Sorry for the broken links.

  19. Jul 10, 2019

    Common misconception: when adjusting for covariates, bias does not necessarily decrease with every additional covariate of no interest added to the analysis.

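    A quick toy simulation (my own example, not from the original thread) of one way this happens: conditioning on a collider. Here x and y are independent, yet adjusting for their common effect c induces a spurious association of about -0.5.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        x = rng.normal(size=n)
        y = rng.normal(size=n)            # truly independent of x
        c = x + y + rng.normal(size=n)    # collider: a common effect of x and y

        # Unadjusted slope of y on x is ~0, as it should be.
        b_unadjusted = np.polyfit(x, y, 1)[0]

        # "Adjusting" for the covariate c in a multiple regression introduces bias.
        design = np.column_stack([np.ones(n), x, c])
        coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
        print(f"without adjustment: {b_unadjusted:+.3f}, adjusting for c: {coefs[1]:+.3f}")
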
  20. Jul 9, 2019

    Nice review of changes in statistical culture in the era of big data, by Steve Smith and colleagues: regularization & dimensionality reduction; empirical validation through open data; a focus on accuracy rather than interpretability.

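    A compact illustration of two of those themes, regularization and dimensionality reduction, judged purely by out-of-sample accuracy; the synthetic data and pipeline choices are mine, not taken from the review.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # High-dimensional toy problem: many more features than informative directions.
        X, y = make_regression(n_samples=200, n_features=500, n_informative=20,
                               noise=5.0, random_state=0)

        # Dimensionality reduction (PCA) feeding a regularized linear model (ridge),
        # validated empirically by cross-validated predictive R^2.
        model = make_pipeline(PCA(n_components=30), RidgeCV(alphas=np.logspace(-3, 3, 13)))
        print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
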
