Tweets
Radioactive data: tracing through training "makes imperceptible changes to this dataset such that any model trained on it will bear an identifiable mark. The mark is robust to strong variations such as different architectures or optimization methods" https://arxiv.org/abs/2002.00937
Thomas Lahore Retweeted
Welcome OpenAI to the PyTorch community! https://twitter.com/OpenAI/status/1222927584033247232
Thomas Lahore Retweeted
Over a million particles running real-time on the gpu. They’re attracted to each other while having a weaker desire to reassemble the image.
#creativecoding #madewithunity #indiedev #shaders #unity3d #psychobiotik #cohesion
Thomas Lahore Retweeted
Finally, Differentiable Physics is Here!
Full video (ours): https://youtu.be/T7w7QuYa4SQ
Source paper: https://github.com/yuanming-hu/difftaichi
#deeplearning #ai #machinelearning #science #twominutepapers
Thomas Lahore Retweeted
Quaternions and Euler angles are discontinuous and difficult for neural networks to learn. They show 3D rotations have continuous representations in 5D and 6D, which are more suitable for learning. i.e. regress two vectors and apply Gram-Schmidt (GS). https://arxiv.org/abs/1812.07035
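The 6D-to-rotation map described in that paper fits in a few lines. A minimal NumPy sketch of the construction (the function name `rotation_from_6d` and the sample input are mine): orthonormalize the two regressed 3-vectors with Gram-Schmidt, then complete the frame with a cross product.

```python
import numpy as np

def rotation_from_6d(v):
    """Map a 6D vector (two stacked 3-vectors) to a rotation matrix
    via Gram-Schmidt, following the construction in arXiv:1812.07035."""
    a, b = v[:3], v[3:]
    e1 = a / np.linalg.norm(a)          # normalize the first vector
    u2 = b - np.dot(e1, b) * e1         # remove the e1 component from b
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)               # complete a right-handed frame
    return np.stack([e1, e2, e3], axis=1)

# Any generic 6D input yields a valid rotation (orthonormal, det = +1),
# which is why a network can regress this representation continuously:
R = rotation_from_6d(np.array([1.0, 2.0, 0.5, -0.3, 0.8, 1.1]))
```

The continuity claim is the point: small changes in the 6D input give small changes in R, with no gimbal-lock or double-cover discontinuities.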
Thomas Lahore Retweeted
20 million connections have been mapped between 25,000 neurons in the fruit fly brain by researchers at
@HHMIJanelia. “It’s a landmark,” says our Clay Reid who is working on a similar effort in the mouse brain. https://www.wired.com/story/most-complete-brain-map-ever-is-here-a-flys-connectome/
Thomas Lahore Retweeted
The quiet semisupervised revolution continues https://twitter.com/D_Berthelot_ML/status/1219823580654948353
Thomas Lahore Retweeted
Flax: A neural network library for JAX designed for flexibility (pre-release) https://github.com/google-research/flax/tree/prerelease
Reformer: The Efficient Transformer "we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L log L), where L is the length of the sequence" paper: https://arxiv.org/abs/2001.04451v1 code: https://github.com/google/trax/tree/master/trax/models/reformer
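The hashing at the core of that claim can be sketched briefly. A minimal NumPy illustration of the angular LSH scheme the Reformer paper describes (random projection, then argmax over the projections and their negations); the name `lsh_buckets` and the toy vectors are mine:

```python
import numpy as np

rng = np.random.default_rng(1)

def lsh_buckets(x, n_buckets):
    """Angular LSH as in Reformer (arXiv:2001.04451): project each row of x
    onto a random matrix R and take the argmax over [xR, -xR]. Rows with
    similar directions tend to share a bucket, so attention can be restricted
    to bucket-mates instead of all L^2 pairs."""
    d = x.shape[-1]
    R = rng.normal(size=(d, n_buckets // 2))
    h = x @ R
    return np.argmax(np.concatenate([h, -h], axis=-1), axis=-1)

# A vector and a scaled copy of it hash to the same bucket (the hash only
# depends on direction); its negation lands in a different one:
v = rng.normal(size=4)
buckets = lsh_buckets(np.stack([v, 2.0 * v, -v]), n_buckets=8)
```

After hashing, the model sorts positions by bucket and attends within chunks of the sorted sequence, which is where the O(L log L) cost comes from.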
Thomas Lahore Retweeted
https://arxiv.org/abs/2001.04413 Cool theory paper presenting a problem that:
- can be efficiently learned by SGD with a DenseNet with x^2 nonlin,
- cannot be efficiently learned by any kernel method, including NTK.
Thomas Lahore Retweeted
....The reason for this is also why it's more efficient for human engineers to build AI systems through machine learning than through direct programming. The price is training data.
Thomas Lahore Retweeted
It is more efficient for evolution to specify the behavior of an intelligent organism by encoding an objective to be optimized by learning than by directly encoding a behavior. The price is learning time. The reason... https://www.facebook.com/722677142/posts/10156524222807143/
Thomas Lahore Retweeted
A fascinating new Nature paper from @DeepMindAI hypothesizes (and shows supporting data!) about how state of the art reinforcement-learning algorithms may explain how dopamine works in our brains. https://www.technologyreview.com/s/615054/deepmind-ai-reiforcement-learning-reveals-dopamine-neurons-in-brain/
Thomas Lahore Retweeted
By restructuring math expressions as a language, Facebook AI has developed the first neural network that uses symbolic reasoning to solve advanced mathematics problems. https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/
Thomas Lahore Retweeted
Introducing Tokenizers: ultra-fast, extensible tokenization for state-of-the-art NLP
https://github.com/huggingface/tokenizers
What an elegant idea: Choosing the Sample with Lowest Loss makes SGD Robust "in each step, first choose a set of k samples, then from these choose the one with the smallest current loss, and do an SGD-like update with this chosen sample" https://arxiv.org/abs/2001.03316
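The quoted update rule is simple enough to sketch end to end. A toy NumPy illustration on least-squares regression with grossly mislabeled samples, assuming squared loss and a constant step size (the data, hyperparameters, and the name `min_k_loss_sgd` are mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data with a few corrupted (outlier) labels.
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
y[:10] += 50.0  # gross outliers

def min_k_loss_sgd(X, y, k=8, lr=0.1, steps=4000):
    """Sketch of the rule from arXiv:2001.03316: at each step draw k samples,
    keep only the one with the smallest current loss, and take a plain SGD
    step on that sample alone."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        idx = rng.integers(0, n, size=k)        # draw k candidate samples
        losses = (X[idx] @ w - y[idx]) ** 2     # current per-sample losses
        i = idx[np.argmin(losses)]              # keep the lowest-loss one
        grad = 2.0 * (X[i] @ w - y[i]) * X[i]   # squared-loss gradient
        w -= lr * grad
    return w

w = min_k_loss_sgd(X, y)
```

The robustness comes from the selection step: a grossly mislabeled sample almost never has the smallest loss among k candidates, so it is almost never the one used for the update.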
Thomas Lahore Retweeted
With recent work that enables transformers to process very long training sequences, we could be only scratching the surface of the full capabilities of self-attention networks. They may have strong inductive bias to model things like hi-res video sequences. https://twitter.com/hardmaru/status/1210912823221440514
Thomas Lahore Retweeted
On the Relationship between Self-Attention and Convolutional Layers. This work shows that attention layers can perform convolution and that they often learn to do so in practice. They also prove that a self-attention layer is as expressive as a conv layer. https://openreview.net/forum?id=HJlnC1rKPB
DiffTaichi: Differentiable Programming for Physical Simulation "a new differentiable programming language tailored for building high-performance differentiable physical simulators" https://arxiv.org/abs/1910.00935v2 https://github.com/yuanming-hu/difftaichi https://www.youtube.com/watch?time_continue=1&v=Z1xvAZve9aE&feature=emb_logo
Thomas Lahore Retweeted
Here is how AI ate the keyboard
#CES2020