Liang-Chi Hsieh

@viirya

Data Engineering and Machine Learning; CS Ph.D. in Multimedia Analysis

Joined May 2007

Tweets


  1. Retweeted
    Jan 25

    Excellent and perceptive article by on the state of AI and whether machines could ever attain human-level intelligence or consciousness. Features discussion of recent books by & Ernest Davis; Christof Koch; and me.

  2. Retweeted
    Dec 24, 2019

    [ANNOUNCEMENT] The Apache Spark 3.0 Preview 2 is here! It is the second preview release. Try it now and let us know what you think! View the release notes. Happy Holidays!

  3. Retweeted
    Nov 20, 2019

    OpenJDK Startup - Late 2019 Update: 40% less memory, 40% less CPU

  4. Retweeted
    Oct 28, 2019

    Introduction to Adversarial Machine Learning A tutorial by that presents an overview of current approaches for adversarial attacks and defenses in the literature.

  5. Retweeted
    Oct 28, 2019

    Great results finding DNN optimizations automatically by at today. Zhihao is also on the academic job market this year. More info and source code:

  6. Retweeted
    Oct 16, 2019
  7. Retweeted

    This is the rejection letter for the work that just won the Nobel Prize. Believe in yourself. Everyone else will catch up eventually.

  8. Retweeted
    Oct 11, 2019

    New research demonstrates how a model for multilingual machine translation of 100+ languages, trained as a single massive neural network, significantly improves performance on both low- and high-resource language translation. Read all about it at:

  9. Retweeted
    Sep 25, 2019
    Replying to

    No opinion on favorite or not, but this paper , , & I submitted to NeurIPS'14 was rejected (~2K citations): Distilling the Knowledge in a Neural Network 2/3 said "1: This work is incremental and unlikely to have much impact"

  10. Retweeted
    Sep 21, 2019
  11. Retweeted

    We see more significant improvements from training data distribution search (data splits + oversampling factor ratios) than neural architecture search. The latter is so overrated :) (A hypothetical sketch of such a search follows after this list.)

  12. Retweeted
    Sep 15, 2019

    Simple statistical methods are shown to perform much better than fancy machine learning on a whole bunch of real-world sequence-prediction datasets. The reason: the time series used are tiny by ML standards, and all the ML methods overfit. (A minimal sketch reproducing this effect follows after this list.)

  13. Retweeted
    Sep 13, 2019

    New EMNLP paper “Investigating Multilingual NMT Representation at Scale” w/ , , @caswell_isaac, . We study transfer in massively multilingual NMT from the perspective of representational similarity. Paper: 1/n

  14. Retweeted
    Sep 11, 2019

    New research into cross-modal learning applies to video content, enabling self-supervised training of models to understand high-level semantic features in video that occur over long time frames. Learn more at ↓

  15. Retweeted
    Sep 6, 2019

    Today, we’re happy to release two new natural language dialog datasets, which capture the richness of natural dialog, for use in training more effective digital assistants that can understand complex language. Learn more and grab the data at ↓

  16. Retweeted
    Aug 27, 2019

    Can neural network architectures alone, without learning any weight parameters, encode solutions for a given task? We search for “weight agnostic neural network” architectures that can perform various tasks even when using random weight values. Learn more→

  17. Retweeted
    Aug 22, 2019

    Nice case study from HyperloopOne about running pandas code on Apache Spark with the new Koalas library: . No more complex rewrite into Spark DataFrames needed. (A minimal Koalas sketch follows after this list.)

  18. Retweeted
    Aug 14, 2019

    Google’s sibling DeepMind lost $572 million last year. What does it mean? Some thoughts I wrote for .

  19. Retweeted
    Aug 16, 2019

    Speaker diarization—separating speech from different speakers—is critical for joint speech recognition. New research based on a recurrent neural network transducer architecture improves diarization performance by a factor of ~10. Learn how it's done here:

  20. Retweeted
    Aug 13, 2019

    SQL on Hadoop friends: We're publishing a HadoopDB retrospective in VLDB 2019. Section 5 speaks more broadly about the current SQL on Hadoop ecosystem. Please let me know if we're missing anything important. Camera ready (final vers) of paper due tomorrow.

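The retweet about training data distribution search (item 11) can be made concrete with a small sketch. Everything below is an illustrative assumption rather than the authors' setup: a synthetic imbalanced dataset, scikit-learn's LogisticRegression as the model, and a plain grid over oversampling factors.

```python
# A hypothetical sketch of "training data distribution search": try several
# oversampling factors for the rare class, keep the best validation score.
# The dataset, model, and factor grid are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 1.5).astype(int)  # rare positives
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best = None
for factor in [1, 2, 4, 8, 16]:
    # Oversample the positive class by duplicating its rows `factor` times.
    pos = np.flatnonzero(y_tr == 1)
    idx = np.concatenate([np.arange(len(y_tr))] + [pos] * (factor - 1))
    model = LogisticRegression().fit(X_tr[idx], y_tr[idx])
    score = model.score(X_val, y_val)
    if best is None or score > best[1]:
        best = (factor, score)

print(f"best oversampling factor: {best[0]} (val accuracy {best[1]:.3f})")
```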
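
The overfitting claim in item 12 is easy to reproduce on synthetic data. This is a minimal sketch, not the datasets from the thread: a tiny random-walk series, a last-value naive baseline, and a high-degree polynomial standing in for a "fancy" overfitting model.

```python
# Minimal sketch: on a tiny series, a "repeat the last value" baseline beats
# a high-capacity model that overfits the training portion. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=60))  # tiny random-walk series, 60 points
train, test = y[:48], y[48:]

# Naive baseline: forecast every test point with the last training value.
naive_pred = np.full(len(test), train[-1])

# "Fancy" model: degree-12 polynomial fit to 48 points, then extrapolated.
t_train = np.arange(len(train))
t_test = np.arange(len(train), len(y))
coeffs = np.polyfit(t_train, train, deg=12)
poly_pred = np.polyval(coeffs, t_test)

def mae(pred):
    return np.mean(np.abs(pred - test))

print(f"naive MAE: {mae(naive_pred):.2f}, polynomial MAE: {mae(poly_pred):.2f}")
```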
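
Finally, the Koalas retweet (item 17): a minimal sketch of the pandas-style API executing on Spark. The file and column names are made up for illustration; it assumes the `koalas` package is installed alongside PySpark.

```python
# Minimal Koalas sketch: pandas-style calls, executed by Spark underneath.
# "sensor_readings.csv" and its columns are hypothetical.
import databricks.koalas as ks

kdf = ks.read_csv("sensor_readings.csv")

# Familiar pandas idioms, no rewrite into Spark DataFrame operations needed.
avg = kdf.groupby("sensor_id")["reading"].mean()
print(avg.sort_index().head(10))
```

Under the hood Koalas translates these calls into Spark DataFrame operations, which is why existing pandas code can scale out without the rewrite the tweet mentions.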
