AdA

@adantro

PhD candidate | MSc | Closet | Co-founder | African

46°53'43.0"S 37°45'10.6"E
Joined September 2009

Tweets

  1. Retweeted
    10 hours ago

    Join us online in our meeting on Dimensionality Reduction and Population Dynamics in Neural Data Feb 11-14. The talks (well, most of them) are going to be streamed. Use the link in the meeting website below:

  2. 8 hours ago

    I found this a very interesting interview. Gets at what is the purpose, and mistakes, of - and in general, journalism in Much respect for the measure of [The Daily] The Lessons of 2016 via

  3. Jan 31

    Good thread, with ppl in the business, discussing the very problematic article on and by

  4. Jan 31

    anyone got some examples of "well written" scientific code? Whatever your field, could you point to some open code where you think "wow, this is really nicely done." Not necessarily fancy or anything, just good practice! 🙏 for guidance & inspiration

  5. Retweeted
    Jan 29

    Excited to share our Pre-Print 🏃‍♀️🏃‍♂️ Evidence of variable performance responses to the Nike 4% shoe: Definitely not a game-changer for all recreational runners via @OSFrameworkThe. Great team

    Show this thread
  6. Retweeted
    Jan 31

    On Tuesday, in my class, we learned that all a neural net does is stretch / contract the fabric of space. For example, this 3-layer net (1 hidden layer of 100 positive neurons) gets its 5D logits (2D projections) linearly separable by the classifier hyperplanes (lines). (A minimal sketch of such a net appears after this list.)

  7. Jan 31

    Another devoted, passionate (probably) murdered for being an obstacle to greed & resource pillaging.

  8. Retweeted
    Jan 13

    The Monarchs searching for water at Santuario El Rosario, Ocampo, Michoacán

  9. Jan 26

    All in all, an interesting idea linking &

    Show this thread
  10. Jan 26

    What I find interesting are the variances in the learned latents. These are much larger for the constrained model than the 'free' model, suggesting the latter is perhaps suffering from some sort of mode collapse? Or overfitting? (12/12)

    Show this thread
  11. Jan 26

    Each column is a latent. The first 4 rows are factors. For a well-tuned β value, the constrained model learns to encode each of these pretty clearly with 1 or 2 latents, whereas the unrestricted model has much more latent mixing. (11/12)

    Show this thread
  12. Jan 26

    So what does this look like? Here's an example of training to encode a generated set of 'blobs'. Each blob is specified by 4 factors: position (X & Y values, 32x32 options), scale (6 options), and rotation angle (40 options). (A rough sketch of generating such blobs appears after this list.) (10/12)

    Show this thread
  13. Jan 26

    In their experiments, the authors say "the observed data is generated using factors of variation that are densely sampled from their respective continuous distributions." (9/12)

    Show this thread
  14. Jan 26

    Note that training data needs to, in some sense, "span" the latent generator space. I.e. a good number of samples need to have been 'generated' by each factor. (8/12)

    Show this thread
  15. Jan 26

    In this case the posterior is encouraged to be close (in Kullback-Leibler divergence) to a factorised prior. This term appears in the cost function weighted by a factor, β, hence the "Beta-VAE". (A minimal sketch of this objective appears after this list.) (7/12)

    Show this thread
  16. Jan 26

    The model proposed by the authors aims to achieve this by "learning statistically independent components from continuous data". As always the desired structure in the latent space is encouraged through additional constraints in the cost function. (6/12)

    Show this thread
  17. Jan 26

    Think of an autoencoder as transmitting the input through hidden layer 'channels' to the output. "Redundancy is defined as the difference between the maximum entropy that a channel can transmit, and the entropy of messages actually transmitted". (5/12)

    Show this thread
  18. Jan 26

    ... but not others, the latents corresponding to unchanged factors are still useful, improving generalisation. This schematic gives an idea: each prior (p_i) corresponds to a factor, which may compress the latent space less than 'freer' models (e.g. DQN) (4/12)

    Show this thread
  19. Jan 26

    What is the imposed structure? Primarily "disentangled latent factors". This means "single latent units are sensitive to changes in single generative factors, while being relatively invariant to changes in other factors". So if context/tasks differ in terms of some factors... (3/12)

    Show this thread
  20. Jan 26

    One goal is to improve "Automated discovery of early visual concepts from raw image data". Human babies seem good at this: they notice 'new' things. The authors cite evidence that the ventral visual system imposes structure on the neural representations which enables this. (2/12)

    Show this thread
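The 3-layer net mentioned in item 6 is only sketched in the tweet (1 hidden layer of 100 "positive" neurons feeding 5-D logits). Below is a minimal, hypothetical PyTorch version of such a net; the 2-D toy input, the batch size, and the classification step are my assumptions, not details from the tweet.

    import torch
    import torch.nn as nn

    # One hidden layer of 100 ReLU ("positive") neurons, then 5 logits:
    # the hidden layer stretches / contracts the input space until the
    # 5 classifier hyperplanes can separate the classes linearly.
    net = nn.Sequential(
        nn.Linear(2, 100),   # 2-D toy input assumed; the tweet does not say
        nn.ReLU(),           # non-negative activations only
        nn.Linear(100, 5),   # 5 logits = 5 separating hyperplanes
    )

    x = torch.randn(64, 2)        # a toy batch of 2-D points
    logits = net(x)               # shape (64, 5)
    pred = logits.argmax(dim=1)   # predicted class per point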
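Items 12-14 describe the training data: 32x32 'blob' images generated from four densely sampled factors (X and Y position, scale, rotation angle). The snippet below is a rough sketch of such a generator; the anisotropic-Gaussian blob shape, the scale range, and the subsampling of the factor grid are assumptions, since the tweets do not specify them.

    import numpy as np

    IMG = 32                                               # image side in pixels
    X_POS = np.arange(IMG)                                 # 32 options per axis
    Y_POS = np.arange(IMG)
    SCALES = np.linspace(2.0, 6.0, 6)                      # 6 scale options (assumed range)
    ANGLES = np.linspace(0.0, np.pi, 40, endpoint=False)   # 40 rotation options

    def make_blob(x0, y0, scale, angle, aspect=0.5):
        """Render one 32x32 image from its 4 generative factors."""
        ys, xs = np.mgrid[0:IMG, 0:IMG].astype(float)
        dx, dy = xs - x0, ys - y0
        # rotate the coordinate frame so the blob's long axis follows `angle`
        u = np.cos(angle) * dx + np.sin(angle) * dy
        v = -np.sin(angle) * dx + np.cos(angle) * dy
        # anisotropic Gaussian, elongated along u and squashed along v
        return np.exp(-0.5 * ((u / scale) ** 2 + (v / (aspect * scale)) ** 2))

    # Densely sample the factor grid so the data "spans" the generator space
    # (item 14); subsampled here only to keep the example small.
    dataset = np.stack([
        make_blob(x, y, s, a)
        for x in X_POS[::4] for y in Y_POS[::4]
        for s in SCALES for a in ANGLES[::8]
    ])
    print(dataset.shape)   # (1920, 32, 32)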
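Items 15-16 describe the objective: the approximate posterior is pulled towards a factorised prior by a KL term weighted by β. Here is a minimal sketch of that loss, assuming a unit-Gaussian prior, a Bernoulli reconstruction term, and arbitrary layer sizes and β; none of those specifics come from the thread.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BetaVAE(nn.Module):
        def __init__(self, n_pixels=32 * 32, n_latents=10, beta=4.0):
            super().__init__()
            self.beta = beta
            self.encoder = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU())
            self.to_mu = nn.Linear(256, n_latents)       # posterior means
            self.to_logvar = nn.Linear(256, n_latents)   # posterior log-variances
            self.decoder = nn.Sequential(
                nn.Linear(n_latents, 256), nn.ReLU(), nn.Linear(256, n_pixels))

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
            return self.decoder(z), mu, logvar

        def loss(self, x):
            recon, mu, logvar = self(x)
            # Reconstruction term (Bernoulli likelihood over pixel intensities).
            rec = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
            # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians:
            # the term pulling the posterior towards the factorised prior.
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            # beta > 1 weights the KL term more heavily -- the disentangling pressure.
            return rec + self.beta * kl

    model = BetaVAE(beta=4.0)
    x = torch.rand(16, 32 * 32)   # stand-in for a batch of flattened blob images
    print(model.loss(x))

With β = 1 this reduces to a plain VAE; larger β trades reconstruction quality for latents that stay closer to the independent prior, which is the 'constrained' model the thread compares against the 'free' one.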
