Tweets

  1. Retweeted a Tweet
    19 Apr 2018

    Why yes, we did just train CIFAR10 ~2000% faster than the previous best on DAWNBench, using fastai and . :) Blog post on how we did it coming soon - but first we have some more experiments to run...

  2. Retweeted a Tweet
    18 Apr 2018

    Yet another paper using compression to get generalization bounds (on large neural networks). One difference: this paper does the hard work to actually turn them into nonvacuous bounds. I think this step is important.

  3. Retweeted a Tweet

    The integration of TensorFlow with 's TensorRT optimizes neural network models & speeds up inference across GPU-accelerated platforms. Users can expect high inference performance plus a near-transparent workflow. Learn more on the TensorFlow blog ↓

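    As a rough illustration of that "near-transparent" workflow (a sketch, not the official example; the model path, output node name, and sizes below are placeholder assumptions), conversion with the TensorFlow 1.x-era contrib TF-TRT API looks roughly like this:

        # Sketch: convert a frozen TensorFlow graph with TF-TRT (TF 1.x contrib API).
        # The path, output node name, and sizes are placeholders, not from the tweet.
        import tensorflow as tf
        import tensorflow.contrib.tensorrt as trt

        with tf.gfile.GFile("frozen_model.pb", "rb") as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())

        # Supported subgraphs are replaced by TensorRT engines; the rest stays in TF.
        trt_graph = trt.create_inference_graph(
            input_graph_def=graph_def,
            outputs=["logits"],                 # placeholder output node name
            max_batch_size=8,
            max_workspace_size_bytes=1 << 30,
            precision_mode="FP16")              # or "FP32" / "INT8"

        # Import the optimized graph and run inference as usual.
        with tf.Graph().as_default():
            tf.import_graph_def(trt_graph, name="")
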
  4. Retweeted a Tweet

    Nice work from OpenAI on evolving loss functions to quickly master new tasks:

  5. Retweeted a Tweet
    16 Apr 2018

    Fantastic new paper from FAIR folks: Mechanical Turker Descent, a procedure to collect high-quality annotations from multiple Turkers.

  6. Retweeted a Tweet
    17 Apr 2018

    Self-driving cars are often used as an example of how adversarial attacks can do harm in the real world. In our new preprint, @samfin55, , and I argue that medicine is the perfect storm of incentive + opportunity for adversarial attacks:

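    To make "adversarial attacks" concrete, here is a minimal FGSM-style perturbation sketch in PyTorch; this is a generic illustration of the attack family, not code from the preprint:

        # Sketch: fast gradient sign method (FGSM), a basic adversarial attack.
        # Generic illustration only; not the preprint's code.
        import torch
        import torch.nn.functional as F

        def fgsm(model, x, y, epsilon=0.01):
            """Perturb input x so the model's loss on true label y increases."""
            x = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            # Step in the direction of the gradient sign, keep pixels in [0, 1].
            return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
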
  7. Retweeted a Tweet
    16 Apr 2018
  8. Retweeted a Tweet
    17 Apr 2018

    “This result leads to a new way of initializing gate biases in LSTMs and GRUs. This new ‘Chrono Initialization’ is shown to greatly improve learning of long term dependencies, with minimal implementation effort.”

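    For concreteness, a minimal sketch of chrono initialization for a PyTorch LSTM, assuming the recipe b_f ~ log(U[1, T_max - 1]) with b_i = -b_f; the layer sizes and t_max below are placeholders:

        # Sketch: chrono initialization of LSTM gate biases.
        # t_max is the assumed maximum dependency length (placeholder value below).
        import torch
        import torch.nn as nn

        def chrono_init(lstm: nn.LSTM, t_max: int):
            h = lstm.hidden_size
            for name, p in lstm.named_parameters():
                if "bias" in name:
                    p.data.zero_()
            # PyTorch packs LSTM biases as [input, forget, cell, output] gates.
            b = lstm.bias_ih_l0.data
            b[h:2 * h] = torch.log(torch.empty(h).uniform_(1, t_max - 1))  # forget gate
            b[0:h] = -b[h:2 * h]                                           # input gate

        lstm = nn.LSTM(input_size=32, hidden_size=64)
        chrono_init(lstm, t_max=784)
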
  9. Retweeted a Tweet
    16 Apr 2018

    I am a huge fan of any attempts to SIMPLIFY existing approaches. We need more of this, rather than the overfitting circus that seems to be the norm. The unreasonable effectiveness of the forget gate

  10. Retweeted a Tweet
    16 Apr 2018

    This week's winner looks at using decision trees to study dementia outcomes.

  11. Retweeted a Tweet
    16 Apr 2018

    RNNs' gating mechanisms can be derived from invariance to general time transformations. Based on this insight, they proposed a new initialization, "chrono initialization", which adjusts the gate biases so the network can handle long-term dependencies.

  12. Retweeted a Tweet
    17 Apr 2018

    Released Chainer/CuPy v4.0.0! Major performance improvements including TensorCore support and the iDeep backend, NCCL2 support, and Caffe export; plus CUDA 9.1 support, wheel packages, FFT support, etc. More in the blog post and release notes.

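    As a small usage sketch of the FFT support mentioned in the release (array sizes here are arbitrary):

        # Sketch: CuPy as a GPU drop-in for NumPy, using the FFT support from the release.
        import cupy as cp

        x = cp.random.randn(1024).astype(cp.float32)   # array lives on the GPU
        spectrum = cp.fft.fft(x)                       # GPU FFT
        power = cp.abs(spectrum) ** 2
        print(float(power.sum()))                      # bring a scalar back to the host
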
  13. Retweeted a Tweet
    17 Apr 2018
  14. Retweeted a Tweet
    17 Apr 2018

    Summary of the winning solution for Data Science Bowl 2018:
    - modified U-Net > Mask-RCNN
    - borders between cells added as targets
    - heavy augmentations
    - deep encoders: DPN-92, ResNet-152, InceptionResNetV2
    - watershed + morphology for postprocessing

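    For the last step of that pipeline, a minimal sketch of watershed + morphology postprocessing on a predicted cell-probability map; the thresholds and minimum size are placeholder assumptions, not the winning team's settings:

        # Sketch: split touching cells with a distance-transform-seeded watershed.
        # Thresholds/sizes are placeholders, not the winning team's settings.
        from scipy import ndimage as ndi
        from skimage.segmentation import watershed
        from skimage.morphology import remove_small_objects

        def postprocess(prob_map, threshold=0.5, seed_threshold=0.8, min_size=20):
            mask = remove_small_objects(prob_map > threshold, min_size=min_size)
            seeds, _ = ndi.label(prob_map > seed_threshold)  # confident interiors as markers
            distance = ndi.distance_transform_edt(mask)
            # Grow markers downhill on the negative distance map, restricted to the mask.
            return watershed(-distance, markers=seeds, mask=mask)
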
  15. Retweeted a Tweet

    A step-by-step walkthrough (with Keras implementation) of & Juergen's "World Models" paper:

  16. Retweeted a Tweet
    17 Apr 2018
  17. Retweeted a Tweet
    16 Apr 2018

    The unreasonable effectiveness of the forget gate: they stripped the LSTM of all but the forget gate.

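    A minimal sketch of what "all but the forget gate" can look like, as a generic forget-gate-only recurrent cell in the spirit of the paper (not the authors' exact formulation):

        # Sketch: a recurrent cell with only a forget gate; generic illustration.
        import torch
        import torch.nn as nn

        class ForgetGateCell(nn.Module):
            def __init__(self, input_size, hidden_size):
                super().__init__()
                self.forget = nn.Linear(input_size + hidden_size, hidden_size)
                self.candidate = nn.Linear(input_size + hidden_size, hidden_size)

            def forward(self, x, h):
                z = torch.cat([x, h], dim=-1)
                f = torch.sigmoid(self.forget(z))      # the only gate
                c = torch.tanh(self.candidate(z))      # candidate state
                return f * h + (1.0 - f) * c           # coupled forget/update

        cell = ForgetGateCell(16, 32)
        h = torch.zeros(4, 32)
        for x in torch.randn(10, 4, 16):               # 10 timesteps, batch of 4
            h = cell(x, h)
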
  18. Retweeted a Tweet

    Much like boolean algebra isn't fundamental to computing & programming in an abstract sense, but rather is an arbitrary low-level design choice, I don't think brains are fundamental to intelligence. I'd posit the most compact substrate for defining cognition is mathematics.

  19. Retweeted a Tweet

    The major finding is: “The forecasting accuracy of ML forecasting methods is lower than the worst of statistical ones while the accuracy of more than half the ML methods is lower than a random walk”.

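    For reference, the "random walk" baseline in that comparison is the naive last-value forecast; a generic sketch on synthetic data (not the study's code):

        # Sketch: naive "random walk" forecast baseline on a synthetic series.
        import numpy as np

        series = np.cumsum(np.random.randn(200))       # synthetic series
        train, test = series[:150], series[150:]

        naive = np.full_like(test, train[-1])          # repeat the last observed value
        mae = np.mean(np.abs(test - naive))
        print(f"naive (random walk) MAE: {mae:.3f}")
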
  20. Retweeted a Tweet
    15 Apr 2018

    MUNIT: Multimodal unsupervised image-to-image translation. Learn to translate one input dog image to a distribution of cat images without paired training data. paper: code: by

