NimTorch

@NimTorch

Pytorch - Py + Nim = A Nim frontend for pytorch

Joined September 2018

Tweets


  1. Retweeted
    Feb 4

    Sounds like you guys need stuff like ! If only I had some support to keep developing it! But everyone seems stuck with bad tools and bad pipelines wasting their productivity...

  2. Retweeted
    Jan 4

    Since the 1-neuron XOR became a topic… a 1-hidden-neuron XOR network evolved using simulated annealing (in less than a second... could be better). Basically a "sin" activation seems to be the key. (A sketch follows after this timeline.)

  3. Retweeted
    Dec 5, 2019

    I didn't expect it, but actually this is a good read for anyone curious about where "AI" is and its limits.

  4. Retweeted
    Jul 24, 2019

    Impressive work. Very efficient way of doing style transfer. Style Transfer with GANs on HD Images - Towards Data Science

  5. Retweeted
    Jul 17, 2019

    Finally people are starting to talk about spiking neural networks! Thanks and

  6. Retweeted
    Jun 26, 2019

    marketing before benchmarks, also highly suspicious...

  7. Retweeted
    Jun 11, 2019

    We are working on a product very much inspired by this and neuroevolution in general. MNIST was a huge challenge for me in terms of CPU optimizations. Given that you work at Google and Google Cloud is probably cheap for you :) I wonder how many VMs you had to throw at it?

  8. Jun 7, 2019

    With newruntime and destructors there is a lot of potential! (A sketch follows after this timeline.)

  9. Retweeted
    May 27, 2019

    With nimline by , and work wonderfully together. I uploaded a quick sample

  10. Jan 22, 2019

    This is the magic behind NimTorch and the secret ingredient for seamless native performance.

  11. Jan 17, 2019

    We are thinking of releasing automated C bindings to via . The magic lies in the ability to generate C headers out of the generated code. Potentially any language (e.g. ) can consume them and harness the power of low-level . (A sketch follows after this timeline.)

  12. Retweeted
    Jan 14, 2019
    Replying to

    I know I may sound like a preacher, but I'm a productivity advocate, so actually you could also use nimpy and write a CUDA kernel straight in and (a nimpy sketch follows after this timeline)

  13. Retweeted
    Jan 14, 2019

    As usual, training remains something only super entities can do. But this is not necessarily the case! This is exactly my inspiration and motivation to push , because being able to train a partially trained network on a small device definitely should be possible.

  14. Retweeted
    Jan 13, 2019

    Another teaser of the plugin we are going to release.

  15. Retweeted
    Jan 12, 2019

    A small teaser of the first plug-in we are going to release soon! Powered by and ATen/C10 ()

  16. Retweeted
    Jan 10, 2019

    I'm surprised nobody in the community has noticed we have a multiplatform (ARM, x86, any) SIMD vector library (using and tested with gcc, clang, vcc: amazing vectorization). I doubt any other language could come up with such easy-to-use wide vectors.

  17. Jan 5, 2019

    Btw! We cut our autograd compile time by a great deal in the latest release! itself is blazing fast, but we are doing a lot of magic to be 1:1 on par with , so it used to be a bit slow. Not so much anymore!

  18. Jan 5, 2019

    We added two new types of builds for ATen and NimTorch on . lite: a very small runtime (low perf though, as it doesn't include any MKL/CUDA etc.), statically linked (produces a single binary, no deps!). static: similar to lite but includes Intel MKL (Windows and Linux only).

  19. Retweeted
    Jan 3, 2019
    Replying to

    At least in the (or ) case, Intel MKL is orders of magnitude faster than OpenBLAS anyway. And you are right, it might require some LD_PRELOAD magic unless linked properly, but it's totally worth it.

  20. Retweeted
    Dec 30, 2018

    Follow-up: it looks like is including and linking mkl-dnn, which we exclude on purpose in . The mkl-dnn library fails miserably to load and work when used in a KVM virtual machine; this means at the moment PyTorch cannot run properly in a VM. (i7-7700K here)

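The one-hidden-neuron XOR tweet above (Jan 4) claims a "sin" activation plus simulated annealing solves XOR in under a second. Below is a minimal sketch of that idea in Nim, not the author's actual code: a single neuron sin(w1*x1 + w2*x2 + b) whose three weights are tuned by annealing; the iteration count, perturbation scale and cooling rate are illustrative assumptions.

```nim
import std/[math, random]

# XOR truth table: ((x1, x2), target)
const data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
              ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

# One neuron with a sin activation: y = sin(w1*x1 + w2*x2 + b)
proc predict(w: array[3, float], x: (float, float)): float =
  sin(w[0] * x[0] + w[1] * x[1] + w[2])

# Sum of squared errors over the four XOR cases
proc loss(w: array[3, float]): float =
  for (x, t) in data:
    result += (predict(w, x) - t) ^ 2

# Plain simulated annealing over the three weights
proc anneal(): array[3, float] =
  result = [rand(-3.0..3.0), rand(-3.0..3.0), rand(-3.0..3.0)]
  var curLoss = loss(result)
  var temp = 1.0
  for step in 0 ..< 20_000:
    var cand = result
    cand[rand(0..2)] += gauss(0.0, temp)       # perturb one weight
    let l = loss(cand)
    # accept improvements, and occasionally worse moves while still hot
    if l < curLoss or rand(1.0) < exp((curLoss - l) / temp):
      result = cand
      curLoss = l
    temp *= 0.9995                             # cool down

when isMainModule:
  randomize()
  let w = anneal()
  for (x, t) in data:
    echo x, " -> ", predict(w, x), "  (target ", t, ")"
```

A solution exists at w = [PI/2, PI/2, 0], since sin(PI/2 * (x1 + x2)) yields 0, 1, 1, 0 for the four XOR inputs, which is presumably why the sin activation does the trick.
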
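The newruntime tweet above (Jun 7, 2019) is about Nim's destructor-based memory management, the experimental work that later evolved into --mm:arc. A minimal sketch of what destructors buy you, with an illustrative hand-managed buffer type (the Tensor name and layout are assumptions, not NimTorch's actual type):

```nim
type
  Tensor = object
    data: ptr UncheckedArray[float32]
    len: int

# Called automatically when a Tensor goes out of scope: no GC needed.
proc `=destroy`(t: var Tensor) =
  if t.data != nil:
    dealloc(t.data)

# Forbid implicit copies so the buffer has exactly one owner (moves still work).
proc `=copy`(dst: var Tensor, src: Tensor) {.error.}

proc newTensor(len: int): Tensor =
  Tensor(data: cast[ptr UncheckedArray[float32]](alloc0(len * sizeof(float32))),
         len: len)

when isMainModule:
  var t = newTensor(1024)
  t.data[0] = 1.5'f32
  echo t.data[0]   # the buffer is freed deterministically when t leaves scope
```

The appeal for a tensor library is that large native buffers are released deterministically at scope exit instead of waiting for a garbage collector.
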
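The bindings tweet above (Jan 17, 2019) relies on Nim's ability to emit a C header for procs marked for export. A minimal sketch, assuming a hypothetical mathbits.nim module and build command; nothing here is NimTorch's actual binding generator:

```nim
# mathbits.nim (hypothetical module name)
# A possible build line, producing a shared library plus a C header:
#   nim c --app:lib --header:mathbits.h mathbits.nim
# The emitted header declares dot_f32 with a plain C signature, so C, or any
# language with a C FFI, can call it.

proc dot_f32*(a, b: ptr UncheckedArray[cfloat], len: cint): cfloat
    {.exportc, cdecl, dynlib.} =
  ## Dot product over two C float buffers.
  for i in 0 ..< int(len):
    result += a[i] * b[i]
```

On the consumer side, a C program would include the generated header, call NimMain() once to initialise the Nim runtime, and then use dot_f32 like any other C function.
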
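The nimpy tweet above (Jan 14, 2019) points at https://github.com/yglukhov/nimpy, which turns Nim procs into a Python extension module. A minimal sketch reusing the sin neuron from the first snippet; the file name, build line and hard-coded weights are illustrative assumptions:

```nim
# xor_neuron.nim (hypothetical file name)
# A possible build line for a Python extension module:
#   nim c --app:lib --out:xor_neuron.so xor_neuron.nim   (use .pyd on Windows)
import std/math
import nimpy

proc xorPredict(x1, x2: float): float {.exportpy.} =
  ## One-neuron XOR with hand-picked weights: sin(PI/2 * (x1 + x2)).
  sin(PI / 2 * (x1 + x2))
```

From Python this is then just `import xor_neuron; xor_neuron.xorPredict(1.0, 0.0)`, which returns roughly 1.0.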
