selim

@selimonder

ai researcher

Sarajevo
Joined August 2009

Tweets


  1. Jan 29
  2. Retweeted
    Jan 22

    Cursed StyleGAN2 people after a deep style transfer, makes some interesting things

  3. Retweeted
    Jan 23

    "One of the big mysteries of is why it is necessary to go offline and be unconscious." by William Wisen and Nicholas Franks w/

  4. Retweeted
    Jan 9

    Music sketch diary #2. Prototyping. Generate music and synthesizers by sketching wavetables and scores.

  5. Retweeted
    Jan 2
  6. Dec 31, 2019
  7. Retweeted
    Dec 26, 2019

    Long exposure of a plane taking off

  8. Dec 23, 2019
  9. Retweeted
    Dec 19, 2019

    This animation is so pleasing to watch

  10. Retweeted

    Visualization of a net learning a 2D image function with four different random start parameters. Interesting how much the character of the init remains after 1200 updates.

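The retweeted visualization above can be sketched as a toy experiment. This is a hypothetical minimal version (not the tweet author's code): a tiny tanh MLP regresses intensity from (x, y) coordinates, trained by hand-written full-batch gradient descent from four random starts for the same 1200 updates mentioned in the tweet, so the final states can be compared across initialisations.

```python
import numpy as np

def init_params(rng, hidden=32):
    # Two-layer tanh MLP: (x, y) -> hidden -> intensity.
    # W2 starts small so early predictions stay near zero.
    return {
        "W1": rng.normal(0.0, 1.0, (2, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (hidden, 1)),
        "b2": np.zeros(1),
    }

def gd_step(p, xy, target, lr=0.05):
    # One full-batch gradient-descent step on mean-squared error,
    # with gradients written out by hand (no autodiff dependency).
    h = np.tanh(xy @ p["W1"] + p["b1"])
    err = h @ p["W2"] + p["b2"] - target         # (N, 1)
    n = len(xy)
    dh = (err @ p["W2"].T) * (1.0 - h ** 2)      # backprop through tanh
    p["W2"] -= lr * h.T @ err / n
    p["b2"] -= lr * err.mean(0)
    p["W1"] -= lr * xy.T @ dh / n
    p["b1"] -= lr * dh.mean(0)
    return float((err ** 2).mean())

# Target "image": a smooth intensity function over a 16x16 coordinate grid.
axis = np.linspace(-1.0, 1.0, 16)
grid = np.stack(np.meshgrid(axis, axis), -1).reshape(-1, 2)
target = np.sin(3.0 * grid[:, :1]) * np.cos(3.0 * grid[:, 1:])

final_losses = []
for seed in range(4):                            # four random starts
    params = init_params(np.random.default_rng(seed))
    for _ in range(1200):                        # 1200 updates, as in the tweet
        loss = gd_step(params, grid, target)
    final_losses.append(loss)
```

Rendering each run's predictions over the grid at intervals would reproduce the effect described in the tweet: all four runs fit the same target, yet each retains visible character from its initialisation.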
  11. Retweeted
    Dec 11, 2019

    Entered ’s deepfake detection challenge from first place 🥳 Good luck to all! Going to sleep now 😴

  12. Retweeted
    Dec 9, 2019

    Powerful opening keynote by ! Many inspirational thoughts from developmental psychology. Curiosity and intrinsic motivation in RL have a lot of work to do.

  13. Retweeted
    Dec 8, 2019

    How do you make a VAE learn informative latent representations? The normal prior over-regularises the posterior distribution. Our idea: learn the prior, thus creating more expressive, hierarchical latent spaces. See our blog

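A hypothetical numerical illustration of the idea in the retweet above (not the paper's code): with a fixed N(0, 1) prior, posteriors that drift from the origin pay a large KL penalty, which is what over-regularises them. A prior fitted to the aggregate posterior (closed form here; in a real model it would be learned jointly with the ELBO) pays far less, leaving the latents freer to be informative.

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal covariances,
    # summed over latent dimensions.
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0,
        axis=-1,
    )

rng = np.random.default_rng(0)
# Posterior means for a batch of 128 encodings in an 8-dim latent space,
# deliberately placed off-centre (as an informative encoder might).
mu_q = rng.normal(3.0, 0.5, (128, 8))
var_q = np.full((128, 8), 0.3)

# KL cost under the fixed standard-normal prior.
kl_fixed = kl_diag_gaussians(mu_q, var_q, 0.0, 1.0).mean()

# "Learned" Gaussian prior: moment-match the aggregate posterior.
mu_p = mu_q.mean(0)
var_p = var_q.mean(0) + mu_q.var(0)
kl_learned = kl_diag_gaussians(mu_q, var_q, mu_p, var_p).mean()
```

Here `kl_learned` comes out far below `kl_fixed`: once the prior can move to where the encodings actually live, the regularisation pressure largely disappears.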
  14. Retweeted
    Dec 9, 2019

    Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection

  15. Retweeted
    Dec 7, 2019

    🌌 cls() ::_:: for i=0,1600 do if(i<15)pal(i,({0,128,130,2,136,8,142,137,9,10,135,7})[i+1],1) x=rnd(128) y=rnd(128) a=atan2(x-64,y-64)+.17 d=rnd(7) pset(x+cos(a)*d,y+sin(a)*d/3-cos(a)*d/4,max(0,pget(x,y)+.87-rnd())) end circfill(64,64,5,11) flip()goto _

  16. Retweeted
    Dec 7, 2019

    Note how the tail wagging ceases, so as to divert more neural activity to their processor cores, as their reality shatters.

  17. Retweeted
    Dec 4, 2019

    Glad to share our new work, . Our model can generate high-quality images reflecting the diverse styles (e.g., hairstyles, makeup) of reference images. arXiv: github: co-authors:

  18. Retweeted
    Dec 4, 2019

    An era has ended.... I really like Chainer, with which I started working on deep learning. It has been a very positive experience for my friends and me, one which in some sense shaped us professionally. Still, the world (which has been made better) moves on.

  19. Dec 4, 2019

    unfortunately, twitter lacks a tremendous amount of archer memes, am i right

  20. Retweeted
