Tweets

  1. Pinned Tweet
    Mar 27, 2019

    New blog post about anomaly detection! I am trying to find Golden Retrievers in a celebrity faces dataset using a silly self-supervised task (predicting right from left). Why? Let's talk about it in a thread, like the pros do... 1/7
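    (A rough code sketch of this pretext task follows the tweet list.)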

  2. Jan 10

    ...and his friend, LEGO cybertruck

  3. Jan 10
  4. Retweeted
    Dec 4, 2019

    Our new paper, Deep Learning for Symbolic Mathematics, is now on arXiv. We added *a lot* of new results compared to the original submission. With (1/7)

  5. Retweeted
    Dec 6, 2019

    You MUST play AI Dungeon 2, a text adventure game run by a neural net. built it using 's huge GPT-2-1.5B model, and it will respond reasonably to just about anything you try. Such as eating the moon.

    > Whistle for one of the dragons to come back

    You whistle for the dragon to return. It flies over to you and lands on your shoulder. It looks at you with its glowing yellow eyes and it slowly begins to sniff you. Then it licks your face. The smell is so good that you immediately start licking your own face

    > Transform into a dragon

    You change into a dragon and fly away. Soon, you find yourself soaring through th
  6. Retweeted
    Nov 10, 2019

    😊 Self-supervised learning opens up a huge opportunity for better utilizing unlabelled data while learning in a supervised manner. My latest post covers many interesting ideas for self-supervised learning tasks on images, videos & control problems:

  7. Retweeted

    We release CamemBERT: a Tasty French Language Model (soon on arXiv). CamemBERT is trained on 138GB of French text. It establishes a new state of the art in POS tagging, Dependency Parsing and NER, and achieves strong results in NLI. Bon appétit ! [1/3]

  8. Retweeted
    Oct 27, 2019

    Glad to share our paper on few-shot vid2vid, where we address the scalability issue of our . Now, with 1 model and as few as 1 example image provided at test time, we can render the motion of a target subject. Code coming soon.

  9. Retweeted
    Oct 23, 2019

    New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: Code/models/data/etc: Summary ⬇️ (1/14)
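    (A short sketch of the text-to-text framing follows the tweet list.)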

  10. Retweeted
    Oct 20, 2019

    That Rubik's cube thing was pretty cool and all, but I just successfully installed a brand new faucet for the first time in my kitchen sink, WITHOUT any Bluetooth sensors or LEDs in my fingertips. Zero-shot learning, y'all!

  11. Oct 11, 2019

    Cool idea! Reminder that you, too, are a couple of commands away from having fun with video generation. Code & tutorial here:

  12. Retweeted

    Now the real test: having the AI generate text from a *fake* URL. It worked.

  13. Retweeted
    Sep 16, 2019

    A fascinating article by if you're interested in understanding what makes MLM models like BERT different from LM models like GPT/GPT-2 (auto-regressive) and MT models. And conveyed in such a beautiful blog post, a masterpiece of knowledge sharing!
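    (A small sketch contrasting the two objectives follows the tweet list.)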

  14. Retweeted
    Sep 13, 2019

    He was already on Twitter briefly in the '90s, but there was no one else for him to talk to back then...

  15. Retweeted
    Sep 11, 2019

    Salesforce releases a 1.6B-parameter language model, 0.1B bigger than the current leader. Genuinely cool innovation here in controllable/conditional generation, but I can't help imagining this meeting taking place: blog: code:

  16. Retweeted

    (The “correct” area of research to watch closely is stupid large self-supervised learning or anything that finetunes on/distills from that. Other “shortcut” solutions prevalent today, while useful, are evolutionary dead ends)

  17. Jul 9, 2019

    Highly-semantic GAN-based autoencoder reconstructions. What a time to be alive! Awesome work by , based on 's

  18. Jul 9, 2019

    - Hey, what do you think about this left image?
    - Encoder: It is a pizza with greens and cheese
    - Can you draw that kind of pizza from memory?
    - Decoder: Of course, here we go (draws right image)

  19. Jul 9, 2019

    So, at the risk of oversimplifying, when it comes to representation learning, discriminative models lazily learn just enough, self-supervision way more, and GANs even more.

  20. Retweeted
    Jun 24, 2019

    New blog post: Neural Style Transfer with Adversarially Robust Classifiers. I show that adversarial robustness makes neural style transfer work by default on a non-VGG architecture. Blog: Colab:

  21. Retweeted
    Jun 17, 2019

    Our 2019 Call for Papers is now open! We are seeking abstract submissions on the direct application of statistics, machine learning, deep learning, and data science to the infosec field. Please submit here:

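A rough sketch of the left-vs-right pretext task from the pinned tweet (item 1), in PyTorch. The tweet only says that the self-supervised task is predicting right from left; everything else below (the tiny architecture, the half-splitting, using the pretext loss as the anomaly score) is an illustrative assumption, not code from the linked blog post.

    # Illustrative sketch only: a "predict right from left" pretext task,
    # with the pretext loss reused as an anomaly score. All architecture
    # and scoring choices here are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HalfSideClassifier(nn.Module):
        """Tiny CNN that guesses whether a half-image is a left or a right half."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)  # class 0 = left half, 1 = right half

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def pretext_batch(images):
        """Split a (N, C, H, W) batch into labeled left/right half-images."""
        half_w = images.shape[-1] // 2
        halves = torch.cat([images[..., :half_w], images[..., -half_w:]], dim=0)
        labels = torch.cat([torch.zeros(len(images)), torch.ones(len(images))]).long()
        return halves, labels

    def anomaly_score(model, image):
        """High pretext loss = the model cannot tell this image's left from
        its right, i.e. it looks unlike the faces it was trained on."""
        halves, labels = pretext_batch(image.unsqueeze(0))
        model.eval()
        with torch.no_grad():
            return F.cross_entropy(model(halves), labels).item()

    # Training is ordinary supervised learning on the derived labels:
    # for images in loader:
    #     halves, labels = pretext_batch(images)
    #     loss = F.cross_entropy(model(halves), labels)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()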
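
A short sketch of what the "unified text-to-text" framing in item 9 (the T5 paper) means in practice: every task becomes a text prefix on the same model. The Hugging Face transformers library and the "t5-small" checkpoint are stand-ins chosen here; the tweet names neither.

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # One model, many tasks: each task is just a different text prefix.
    prompts = [
        "translate English to German: The house is wonderful.",
        "summarize: state authorities dispatched emergency crews tuesday ...",
        "cola sentence: The course is jumping well.",  # acceptability judgment
    ]

    for prompt in prompts:
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids
        output_ids = model.generate(input_ids, max_new_tokens=40)
        print(tokenizer.decode(output_ids[0], skip_special_tokens=True))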
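
A small sketch of the distinction drawn in item 13: a masked language model (BERT) fills a blank using context on both sides, while an autoregressive model (GPT-2) only continues text left to right. The transformers pipelines and the standard public checkpoints used below are assumptions; the tweet names no library.

    from transformers import pipeline

    # Masked LM (BERT): sees both sides of the blank, fills it in.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    for candidate in fill_mask("The chef [MASK] a delicious meal.")[:3]:
        print(candidate["token_str"], round(candidate["score"], 3))

    # Autoregressive LM (GPT-2): sees only the left context, continues it.
    generate = pipeline("text-generation", model="gpt2")
    print(generate("The chef cooked", max_new_tokens=10)[0]["generated_text"])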
