Tweets


  1. Retweeted
    Jan 29

    This video explains 's amazing new Meena chatbot! An Evolved Transformer with 2.6B parameters, trained on 341 GB / 40B words of conversation data, achieves remarkable chatbot performance! "Horses go to Hayvard!"

  2. Retweeted
    Jan 26

    If you want to learn about privacy-preserving machine learning, then there is no better resource than this step-by-step notebook tutorial by . From the basics of private deep learning to building secure ML classifiers using PyTorch & PySyft.

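The tutorial linked above builds on secure multi-party computation. As a rough illustration of its core primitive, here is a pure-Python sketch of additive secret sharing, the building block that PySyft-style frameworks use to compute on data no single party can see. The field modulus `Q` and the three-party split are illustrative choices, not PySyft's API.

```python
import random

Q = 2**31 - 1  # illustrative large modulus; real systems pick this carefully

def share(secret, n_parties=3):
    """Split an integer into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine all shares; any proper subset looks uniformly random."""
    return sum(shares) % Q

def add_shared(a_shares, b_shares):
    """Each party adds its own shares locally -> shares of a + b."""
    return [(a + b) % Q for a, b in zip(a_shares, b_shares)]

a, b = 123, 456
sa, sb = share(a), share(b)
assert reconstruct(sa) == a
assert reconstruct(add_shared(sa, sb)) == (a + b) % Q
```

Addition works share-wise with no communication; multiplication is where real protocols (Beaver triples, etc.) get involved.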
  3. Retweeted
    Jan 23

    Backpropagation and labeled data are the bread and butter of deep learning. But recent research from the University of Amsterdam suggests neither is necessary to train effective neural networks to represent complex data:

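The tweet does not name the specific method, but a classic example of learning without backpropagation or labels is Hebbian learning. Below is a minimal sketch of Oja's rule, which extracts the principal direction of unlabeled data using only local updates; the toy dataset and hyperparameters are invented for the example.

```python
import random

def ojas_rule(samples, lr=0.01, epochs=50):
    """Oja's rule: Hebbian update with weight decay.
    w <- w + lr * y * (x - y * w), where y = w . x.
    Converges toward the top principal direction of the data."""
    dim = len(samples[0])
    w = [random.gauss(0, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# 2-D data varying mostly along the (1, 1) direction.
random.seed(0)
data = []
for _ in range(200):
    t = random.gauss(0, 1.0)   # large variance along (1, 1)
    n = random.gauss(0, 0.1)   # small variance along (1, -1)
    data.append((t + n, t - n))

w = ojas_rule(data)
# w aligns (up to sign) with (1, 1)/sqrt(2): no labels, no backprop used.
```

Note the update is purely local: each weight changes based only on its own input, the unit's output, and a decay term.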
  4. Retweeted
    Jan 7

    Keras inventor Chollet charts a new direction for AI: a Q&A | ZDNet

  5. Jan 5
  6. Nov 2, 2019
  7. Retweeted
    Oct 28, 2019

    Eric Schmidt on the state of the art in AI:
    - Pretty much all interesting AI approaches involve GANs in the middle
    - Speech and images are solved problems
    - TensorFlow is being used by everyone
    - We know data has bias; you don't need to yell that as a new fact.

  8. Retweeted
    Sep 29, 2019

    AI curriculum 🤖
    CS231n: CNNs for Visual Recognition, Stanford | Spring 2019
    CS224n: NLP with Deep Learning, Stanford | Winter 2019
    CS285: Deep Reinforcement Learning, UC Berkeley | Fall 2019

  9. Retweeted
    Jul 25, 2019

    The remaining lectures from my Natural Language Understanding course are now up: I have highest hopes for the contextual word reps one; I tried to methodically walk through those models with diagrams, to supplement the great tutorials already out there.

  10. Retweeted
    Jul 21, 2019
  11. Retweeted
    Jun 19, 2019

    XLNet: Generalized Autoregressive Pretraining for Language Understanding: outperforming BERT on 20 tasks (SQuAD, GLUE, sentiment analysis), while integrating ideas from Transformer-XL: arxiv: code + pretrained models:

  12. Retweeted
    Jun 1, 2019

    Watching this today. Interpretability techniques include Saliency Maps, Occlusion Sensitivity, and Class Activation Maps. Stanford CS230: Deep Learning | Autumn 2018 | Lecture 7 - Interpretabili... (via )

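As a rough illustration of the first of those techniques, here is a minimal input-gradient saliency sketch in pure Python, using central finite differences in place of a framework's autograd. The `toy_model` and its feature weighting are invented for the example; real saliency maps differentiate a deep network's class score with respect to image pixels.

```python
def saliency_map(model, x, eps=1e-5):
    """Input-gradient saliency: |d model(x) / d x_i| per input feature,
    estimated with central finite differences."""
    grads = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grads.append(abs(model(hi) - model(lo)) / (2 * eps))
    return grads

# A toy "model" whose score depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
def toy_model(x):
    return 3.0 * x[0] + 0.5 * x[1] ** 2

sal = saliency_map(toy_model, [1.0, 1.0, 1.0])
# Feature 0 is most salient, feature 2 is irrelevant.
```

The per-feature magnitudes (roughly 3.0, 1.0, 0.0 here) are exactly the "which inputs move the output" signal a saliency map visualizes over pixels.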
  13. Retweeted
    May 29, 2019

    PyTorch 101 Part 1: Understanding Graphs, Automatic Differentiation and Autograd

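The linked tutorial covers PyTorch's actual autograd; as a conceptual sketch only, here is a tiny pure-Python reverse-mode automatic differentiation for scalars. This is not PyTorch's implementation, and it is deliberately naive: it re-traverses shared subgraphs instead of doing a topological sort, which is correct but inefficient.

```python
class Var:
    """A scalar node in a dynamically built computation graph."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_var, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        """Reverse mode: accumulate chain-rule products into .grad."""
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

x = Var(2.0)
y = Var(3.0)
z = x * y + x      # z = x*y + x = 8
z.backward()
# dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```

Each operation records its inputs and local derivatives as it runs; `backward()` then walks the graph in reverse, which is the same define-by-run idea the tutorial explains for PyTorch.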
  14. Retweeted
    May 22, 2019

    A few weeks ago, a friend of mine asked me "Which papers can I read to catch up with the latest trends in modern NLP?". 🏃‍♂️👨‍🎓 I compiled a list of papers and resources for him 📚 and thought it would be great to share it!

  15. Retweeted
    May 20, 2019

    New NLP Newsletter: Marvel, Stanford & CMU NLP Playlists, Voynich, Bitter Lesson Vol. 2, ICLR 2019, Dialogue Demos (via )

  16. Retweeted
    May 17, 2019

    Introducing our new library FastBert. It helps developers and data scientists develop and deploy BERT-based models. Thank you and

  17. Retweeted
    May 14, 2019

    Pixel-aligned Implicit Function (PIFu), a new memory-efficient, fully convolutional 3D representation for recovering a fully textured surface of a clothed person from a single or multi-view image! With Shunsuke S, Zeng H, , Shigeo M,

  18. Retweeted

    An Introduction to Deep Reinforcement Learning (PDF book manuscript, Nov 2018)

  19. Retweeted

    Course 2 of the Specialization is now available on ! You’ll learn how to use TF to handle real-world data, avoid overfitting w/augmentation and dropout, and more. You can take the course for $49 or audit it for free:

  20. Retweeted
    Apr 15, 2019

    Slides & articles from the Workshop on Responsible Recommendation Systems

    “Let Me Tell You Who You are” — Explaining Recommender Systems by Opening Black Box User Profiles
    David Graus, Maya Sappelli, Dung Manh Chu (article)

    Fair Lending Needs Explainable Models for Responsible Recommendation
    Jiahao Chen (article)

    Synthetic Attribute Data for Evaluating Consumer-side Fairness
    Robin Burke, Jackson Kontny, Nasim Sonboli (article, slide)

    The Role of Differential Privacy in GDPR Compliance
    Rachel Cum

    Using Image Fairness Representations in Diversity-Based Re-ranking for Recommendations
    Chen Karako, Putra Manggala (article, slide)

    Fairness-Aware Recommendation of Information Curators
    Ziwei Zhu, Jianling Wang, Yin Zhang, James Caverlee (article, slide)

    A Fairness-aware Hybrid Recommender System
    Golnoosh Farnadi, Pigi Kouki, Spencer K. Thompson, Sriram Srinivasan, Lise Getoor (article)

    Personalizing Fairness-aware Re-rank

    Assessing and Addressing Algorithmic Bias — But Before We Get There
    Jean Garcia-Gathright, Aaron Springer, Henriette Cramer (article, slide)

    Keynote Talk: What Does it Mean for an Algorithm to be Ethical? Connecting Ethics to Policy and Design
    Shalaleh Rismani, Generation R and Open Roboethics Institute (abstract, slide)

    Special Talk: European Public Broadcasters' Path towards Public Service Recommender Systems
    Pierre-Nicolas Schwab, Chairman of the Big Data Initiative, European Broadcasting Union, Geneva, Switzerland (abstract, slide)

    The Case for Public Service Recommender Algorithms
    Benjamin Fields, Rhia Jones, Tim Cowlishaw (article, slide)
