Volodymyr Miz

@mizvladimir

AI Research Scientist. Graph ML solutions for knowledge graphs and NLP. PhD. Wikipedia Insights Team.

Switzerland
Joined October 2011

Tweets

  1. Retweeted · Sep 13

    I am super excited to announce PyG 2.0 - a new major release of PyTorch Geometric brought to you by researchers from TU Dortmund and Stanford University. This new release brings full heterogeneous graph support, GraphGym, and many other features to PyG.
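    (A minimal heterogeneous-graph code sketch for this release is included after this list.)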

  2. Retweeted · Sep 8

    If you love Graph Neural Nets, language & Wikipedia, this is for you! We open sourced data & baselines for the challenging task of generating text from knowledge graphs (& vice versa!) Source Paper

  3. Retweeted · Aug 25

    "Knowledge Graphs 2021: A Data Odyssey" history, lessons learnt, and advances of knowledge graphs, from to . (Gerhard Weikum, )

  4. Retweeted · Aug 5

    I compiled a *short* overview of ~30 papers presented at ACL'21 - neural databases, retrieval, KG embeddings, entity linking, QA, a bunch of new datasets, and (of course) some memes ☺️

  5. Retweeted · Aug 4

    Are Missing Links Predictable? An Inferential Benchmark for Knowledge Graph Completion

  6. Retweeted · Aug 4
  7. Retweeted · Aug 3

    My lectures from Spring 2021's cs124, "From Languages to Information", our Stanford undergrad course introducing NLP, IR, chatbots, recommendation systems, and social networks (+ some guest lectures!), are now online. Hope they're useful!

  8. Retweeted · Jul 10
  9. Retweeted · Jul 1

    How to train state-of-the-art sentence embeddings models? 📺 Here is a deep-dive on: - Which loss to use & how to tune it 📈 - Which type of training data you need 📁 - How to create optimal batches 🔠
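    (A minimal contrastive-training code sketch is included after this list.)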

  10. Retweeted · Jun 25

    Unsupervised Topic Segmentation of Meetings with BERT Embeddings pdf: unsupervised approach based on BERT embeddings, achieves a 15.5% reduction in error rate over existing unsupervised approaches applied to two popular datasets for meeting transcripts

  11. Retweeted · Jun 14

    My Ph.D. thesis is published and available online (and in the Bonn library)! It covers two main directions of my work: similarities and embeddings of nodes and graphs. How to do it in the most scalable ways possible? Link below ;)

  12. Retweeted · Jun 9

    Our latest paper is out: GPT-2's activations predict the degree of semantic comprehension in the human brain. The summary thread below 👇 1/8

  13. Retweeted · Jun 3

    Check out this presentation!

  14. Retweeted · Jun 3

    Decision Transformer: Reinforcement Learning via Sequence Modeling A nice result in the paper: By training a language model on a training dataset of random walk trajectories, it can figure out optimal trajectories by just conditioning on a large reward.

  15. Retweeted · May 28

    New preprint!🤓 "On the Universality of GNNs on Large Random Graphs". What can GNNs compute in the continuous limit? Are recent architectures more powerful than normal message-passing ones? (1/6)

  16. Retweeted · May 25

    Microsoft has announced its first commercial use for its (exclusive) license of OpenAI's GPT-3 — an AI autocomplete tool that turns natural language into code. Details here:

  17. Retweeted · May 25

    🍦VANiLLa: A dataset of 100k questions for question answering over knowledge graphs offering answers in natural language sentences. The answer sentences in this dataset are syntactically and semantically closer to the question than to the triple fact.

  18. Retweeted · May 25
  19. Retweeted · May 21

    GPT-Neo is smaller in size compared to GPT-3 but the training data consists of science websites, stackoverflow, stackexchange and more, making it way better than GPT-3 in these domains 🦾

  20. Retweeted · May 13

    GNNs for heterogeneous graphs and Knowledge graph embedding/completion methods. Lecture 10 of Stanford CS224W Machine Learning with Graphs course just released. Videos: Syllabus:

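To make the PyG 2.0 announcement (item 1) more concrete, here is a minimal sketch of the heterogeneous-graph workflow it refers to: a HeteroData object with typed nodes and edges, and to_hetero() turning an ordinary GNN into a per-type one. The toy 'author'/'paper' graph, feature sizes, and model are illustrative assumptions, not taken from the announcement, and the code assumes torch and torch_geometric >= 2.0.

```python
# Minimal sketch, assuming torch and torch_geometric >= 2.0.
# The 'author'/'paper' toy graph and the model are illustrative, not from the tweet.
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import SAGEConv, to_hetero

data = HeteroData()
data['author'].x = torch.randn(4, 16)                       # 4 authors, 16 features
data['paper'].x = torch.randn(6, 32)                        # 6 papers, 32 features
data['author', 'writes', 'paper'].edge_index = torch.tensor([[0, 1, 2, 3],
                                                             [0, 2, 4, 5]])
# Reverse edges so both node types receive messages.
data['paper', 'rev_writes', 'author'].edge_index = torch.tensor([[0, 2, 4, 5],
                                                                 [0, 1, 2, 3]])

class GNN(torch.nn.Module):
    def __init__(self, hidden, out):
        super().__init__()
        self.conv1 = SAGEConv((-1, -1), hidden)             # lazy per-type input sizes
        self.conv2 = SAGEConv((-1, -1), out)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

# to_hetero duplicates the homogeneous model across node/edge types.
model = to_hetero(GNN(hidden=64, out=8), data.metadata(), aggr='sum')
out = model(data.x_dict, data.edge_index_dict)              # dict of per-type embeddings
print({k: v.shape for k, v in out.items()})
```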
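The sentence-embeddings retweet (item 9) points to a video; the snippet below is only a hedged sketch of the general recipe using the sentence-transformers library: pairs of related sentences, in-batch negatives via MultipleNegativesRankingLoss, and batch size as a key knob. The checkpoint name and toy pairs are assumptions, not the recipe from the video.

```python
# Minimal sketch, assuming the sentence-transformers library is installed.
# Checkpoint name and training pairs are illustrative placeholders.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('distilroberta-base')            # any transformer checkpoint

# Pairs of semantically related sentences; other sentences in the batch act as negatives.
train_examples = [
    InputExample(texts=['How do I reset my password?',
                        'Steps to recover a forgotten password']),
    InputExample(texts=['Best pizza in New York',
                        'Where to find good pizza in NYC']),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss uses in-batch negatives, so larger,
# deduplicated batches usually improve the resulting embeddings.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```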
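Finally, for the knowledge graph embedding/completion methods named in the CS224W retweet (item 20), here is a small TransE-style scoring sketch in plain PyTorch. It shows one representative method rather than the lecture's own code, and the entity/relation counts and dimensions are made up.

```python
# Minimal TransE-style link-prediction scoring sketch in plain PyTorch.
# Entity/relation counts and dimensions are illustrative.
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, num_entities, num_relations, dim=50):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # TransE treats a triple (h, r, t) as plausible when h + r ≈ t,
        # so the score is the negative L1 distance ||h + r - t||.
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

model = TransE(num_entities=1000, num_relations=20)
heads = torch.tensor([0, 1]); rels = torch.tensor([3, 7]); tails = torch.tensor([42, 99])
print(model.score(heads, rels, tails))   # higher score = more plausible missing link
```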
