Tweets
AdA Retweeted
Join us online for our meeting on Dimensionality Reduction and Population Dynamics in Neural Data, Feb 11-14. The talks (well, most of them) are going to be streamed. Use the link on the meeting website below: https://indico.fysik.su.se/event/6818/ pic.twitter.com/QtKLNmBTpQ
I found this a very interesting interview. Gets at what is the purpose, and mistakes, of #NYTimes @nytimes - and, in general, journalism in #USA2020. Much respect for the measure of @deanbaquet. [The Daily] The Lessons of 2016 #theDaily https://podplayer.net/?id=94521713 via @PodcastAddict pic.twitter.com/u7ybI2dlMn
Good thread, with ppl in the business, discussing the very problematic @WSJ article on #Genetics and #SocialSciences by @charlesmurray https://twitter.com/JeremyJBerg/status/1222261456940552196 …
#sciencetwitter anyone got some examples of "well written" scientific code? Whatever your field, could you point to some open code where you think "wow, this is really nicely done"? Not necessarily fancy or anything, just good practice! For guidance & inspiration #compNeuro
AdA Retweeted
Excited to share our Pre-Print

Evidence of variable performance responses to the Nike 4% shoe: Definitely not a game-changer for all recreational runners https://osf.io/preprints/sportrxiv/ctavy/ … via @OSFramework. The great team: @ClinicRunning @cmbeaven @mattdriller @JFEsculier @blaisedubois
AdA Retweeted
On Tuesday, in my class, we learned that all a neural net does is stretch / contract the fabric of space. For example, this 3-layer net (1 hidden layer of 100 positive neurons) gets its 5D logits (2D projections) linearly separable by the classifier hyperplanes (lines). pic.twitter.com/PzVLDxBTNq
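A minimal sketch of that picture (a toy setup of my own; the data, layer sizes, and training loop are assumptions, not the code behind the clip). A one-hidden-layer ReLU net is trained on concentric rings, which no line can separate in the raw 2D input; since the readout is purely linear, high accuracy means the hidden layer has "stretched" the space until hyperplanes suffice:

```python
# Illustrative sketch (assumed toy setup): a 2 -> 100 -> 5 ReLU net
# "stretching" inputs that are NOT linearly separable in 2D (concentric
# rings) until a purely linear readout can separate the classes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 5 concentric rings, class = ring index.
n, k = 1000, 5
y = torch.randint(0, k, (n,))
theta = 2 * torch.pi * torch.rand(n)
r = y.float() + 1.0
X = torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)
X += 0.1 * torch.randn(n, 2)

hidden = nn.Sequential(nn.Linear(2, 100), nn.ReLU())  # the space "stretching"
readout = nn.Linear(100, k)                           # hyperplanes on logits
model = nn.Sequential(hidden, readout)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(1000):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()

# The readout is linear, so high accuracy means the hidden layer has
# warped the rings into a linearly separable configuration.
acc = (model(X).argmax(dim=1) == y).float().mean()
print(f"train accuracy: {acc:.3f}")
```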
Another devoted, passionate #conservationist (probably) murdered for being an obstacle to greed & resource pillaging. https://twitter.com/Homerogomez_g/status/1216464058146349061 …
AdA Retweeted
The monarchs looking for water at Santuario El Rosario, Ocampo, Michoacán pic.twitter.com/5pabAIGHV0
All in all, an interesting idea linking #ML & #Neuroscience
What I find interesting are the variances in the learned latents: these are much larger for the constrained model than for the 'free' model, suggesting the latter is perhaps suffering from some sort of mode collapse? Or overfitting? (12/12)
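One way to probe that interpretation (a hedged sketch of mine, not the analysis behind the thread): for each latent dimension, check how far the posterior mean moves across inputs and how far the per-dimension KL sits above zero; a dimension whose posterior never leaves the prior is collapsed and carries no information. The (mu, logvar) Gaussian-encoder interface is an assumption.

```python
# Hedged sketch: per-latent collapse diagnostics for a Gaussian VAE.
# Assumes the common (mu, logvar) encoder interface.
import torch

def latent_diagnostics(mu: torch.Tensor, logvar: torch.Tensor):
    """mu, logvar: (batch, n_latents) Gaussian posterior parameters."""
    # Closed-form KL(q(z|x) || N(0, I)), averaged over the batch, per latent.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).mean(dim=0)
    # How much the posterior mean varies across inputs: ~0 means the
    # latent ignores the data (a collapsed, uninformative dimension).
    mean_spread = mu.var(dim=0)
    return kl, mean_spread

# Fake posteriors for illustration: latent 0 informative, latent 1 collapsed.
mu = torch.stack([torch.randn(256) * 2.0, torch.zeros(256)], dim=1)
logvar = torch.stack([torch.full((256,), -2.0), torch.zeros(256)], dim=1)
kl, spread = latent_diagnostics(mu, logvar)
print("per-latent KL:", kl, "\nposterior-mean variance:", spread)
```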
Each column is a latent. First 4 rows are factors. For a well-tuned β value, the model learns to encode each of these pretty clearly with 1 or 2 latents, whereas the unrestricted model has much more latent mixing. (11/12)
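A rough way to quantify "one factor per 1-2 latents" versus "latent mixing" (my construction, not the metric behind the figure): correlate ground-truth factors with latent values across a batch; a disentangled model shows roughly one strong entry per factor row.

```python
# Hedged sketch: factor-vs-latent correlation matrix as a quick
# disentanglement check (assumes latents can be paired with true factors).
import numpy as np

def factor_latent_correlation(factors: np.ndarray, latents: np.ndarray):
    """factors: (n, n_factors), latents: (n, n_latents) -> |corr| matrix."""
    f = (factors - factors.mean(0)) / factors.std(0)
    z = (latents - latents.mean(0)) / latents.std(0)
    return np.abs(f.T @ z) / len(f)  # (n_factors, n_latents) Pearson |r|

# Toy check: latent 0 tracks factor 0 cleanly, latent 1 mixes factors 1 and 2.
rng = np.random.default_rng(0)
factors = rng.normal(size=(1000, 3))
latents = np.stack([factors[:, 0],
                    0.7 * factors[:, 1] + 0.7 * factors[:, 2],
                    rng.normal(size=1000)], axis=1)
print(factor_latent_correlation(factors, latents).round(2))
```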
So what does this look like? Here's an example of training to encode a generated set of 'blobs'. Each blob is specified by 4 factors: position (X & Y values, 32x32 options), scale (6 options), and rotation angle (40 options). (10/12) pic.twitter.com/t9DVSEVgYf
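A sketch of such a generator (my construction in the spirit of the described dataset, not the paper's pipeline): each image is fully determined by the 4 factor indices, and enumerating the full 32x32x6x40 factor grid is also what provides the "spanning" discussed in the next two tweets (9/12 and 8/12).

```python
# Hedged sketch: render an anisotropic Gaussian "blob" image from exactly
# 4 ground-truth factors (x position, y position, scale, rotation).
import numpy as np

H = W = 64
XS = np.arange(32)                                   # 32 x-positions
YS = np.arange(32)                                   # 32 y-positions
SCALES = np.linspace(2, 7, 6)                        # 6 scales
ANGLES = np.linspace(0, np.pi, 40, endpoint=False)   # 40 rotation angles

def render_blob(xi, yi, si, ai):
    """Render one HxW image from the 4 factor indices."""
    cx, cy = 16 + XS[xi], 16 + YS[yi]        # keep the blob inside the frame
    s, a = SCALES[si], ANGLES[ai]
    yy, xx = np.mgrid[0:H, 0:W]
    # Rotate coordinates so the blob's long axis points along angle `a`.
    u = (xx - cx) * np.cos(a) + (yy - cy) * np.sin(a)
    v = -(xx - cx) * np.sin(a) + (yy - cy) * np.cos(a)
    return np.exp(-(u**2 / (2 * s**2) + v**2 / (2 * (s / 3) ** 2)))

# "Spanning" the generator: iterate the full 32*32*6*40 factor grid (or a
# dense random subset) so every factor value appears in the training data.
img = render_blob(xi=5, yi=20, si=2, ai=13)
print(img.shape, img.max().round(3))
```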
In their experiments, the authors say "the observed data is generated using factors of variation that are densely sampled from their respective continuous distributions." (9/12)
Note that training data needs to, in some sense, "span" the latent generator space. I.e. a good number of samples need to have been 'generated' by each value of each factor. (8/12)
In this case the posterior is encouraged to be close (in Kullback-Leibler divergence) to a factorised prior. This term appears in the cost function weighted by a factor, β, hence the "Beta-VAE". (7/12)
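In code, that objective looks roughly like this (a sketch of the standard Gaussian β-VAE loss with a Bernoulli/BCE reconstruction term, one common choice; not the authors' exact implementation). Setting β = 1 recovers the plain VAE; β > 1 pushes the posterior harder toward the factorised N(0, I) prior:

```python
# Hedged sketch: beta-VAE objective, diagonal Gaussian encoder + Bernoulli
# decoder. beta weights the KL term, as described in the tweet above.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """x, x_recon: (batch, ...) in [0, 1]; mu, logvar: (batch, n_latents)."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar) / x.size(0)
    return recon + beta * kl

def reparameterise(mu, logvar):
    """Sample z ~ q(z|x) differentiably (the usual reparameterisation trick)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```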
The model proposed by the authors aims to achieve this by "learning statistically independent components from continuous data". As always the desired structure in the latent space is encouraged through additional constraints in the cost function. (6/12)
Think of an autoencoder as transmitting the input through hidden layer 'channels' to the output. "Redundancy is defined as the difference between the maximum entropy that a channel can transmit, and the entropy of messages actually transmitted". (5/12)
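The quoted definition is easy to make concrete for a discrete channel (a toy illustration of mine, not from the paper): a K-symbol channel can transmit at most log2(K) bits, so redundancy is that maximum minus the entropy of the messages actually sent.

```python
# Toy illustration of the quoted definition: redundancy = maximum entropy
# the channel can transmit (log2 K for K symbols) minus the entropy of
# the messages actually transmitted.
import numpy as np

def redundancy(p: np.ndarray) -> float:
    """p: probabilities of the K possible messages actually transmitted."""
    k = len(p)                           # channel alphabet size
    nz = p[p > 0]                        # drop zero-probability symbols
    h_actual = -(nz * np.log2(nz)).sum() # entropy of transmitted messages
    return np.log2(k) - h_actual

print(redundancy(np.array([0.25, 0.25, 0.25, 0.25])))  # 0.0: channel fully used
print(redundancy(np.array([0.97, 0.01, 0.01, 0.01])))  # high: channel underused
```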
... but not others, the latents corresponding to unchanged factors are still useful, improving generalisation. This schematic gives an idea: each prior (p_i) corresponds to a factor, which may compress the latent space less than 'freer' models (e.g. DQN) (4/12) pic.twitter.com/MXjMdMgqPW
What is the imposed structure? Primarily "disentangled latent factors". This means "single latent units are sensitive to changes in single generative factors, while being relatively invariant to changes in other factors". So if context/tasks differ in terms of some factors... (3/12)
One goal is to improve "Automated discovery of early visual concepts from raw image data". Human babies seem good at this: they notice 'new' things. The authors cite evidence that the ventral visual system imposes structure on the neural representations which enables this. (2/12)