Tweets
Tomasz Darmetko retweeted
The most surprising yet true thing anyone has ever pointed out to me on Wikipedia is that the Sun is so bright because it is so big. The power production rate is actually like a few lightbulbs in a box, or the heat from a compost pile or lizard! From: https://en.wikipedia.org/wiki/Sun https://twitter.com/overbye/status/1222661409277911041 pic.twitter.com/YZqnKmchqY
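The brightness claim is easy to sanity-check with rough figures from the linked Wikipedia article (the luminosity and radius below are standard reference values):

```python
import math

# Rough figures for the Sun, as given on the Wikipedia page linked above
L_sun = 3.828e26      # luminosity, watts
R_sun = 6.957e8       # radius, metres

volume = (4 / 3) * math.pi * R_sun**3        # ~1.4e27 cubic metres
power_density = L_sun / volume               # watts per cubic metre

print(f"{power_density:.3f} W/m^3")          # roughly 0.27 W/m^3
```

About a quarter of a watt per cubic metre on average — far dimmer than a lightbulb, comparable to a compost pile. The Sun is blinding only because there are on the order of 10^27 such "boxes".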
Tomasz Darmetko retweeted
New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything Paper: https://arxiv.org/abs/2001.09977 Blog: https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html pic.twitter.com/5SOBa58qx3
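The "perplexity" the paper correlates with human ratings is just the exponentiated average negative log-likelihood the model assigns to observed tokens; a minimal illustration (the token probabilities are made up):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigned to each observed token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every observed token is exactly
# as "perplexed" as a uniform guess among 4 options.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0
```

Lower perplexity means the model finds the actual conversation less surprising, which is why it tracks conversational quality.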
Tomasz Darmetko retweeted
Classifiers are secretly energy-based models! Every softmax giving p(c|x) has an unused degree of freedom, which we use to compute the input density p(x). This makes classifiers into generative models without changing the architecture. https://arxiv.org/abs/1912.03263 pic.twitter.com/IzMPxiNxFQ
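A numpy sketch of the reinterpretation, with toy logits standing in for a real classifier's output: softmax over the logits gives p(y|x), while the logsumexp of the same logits — the degree of freedom softmax normalizes away — can serve as an unnormalized log p(x):

```python
import numpy as np

def log_softmax(logits):
    """p(y|x): invariant to adding any constant to all logits."""
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def unnormalized_log_px(logits):
    """The 'free' degree of freedom: log sum_y exp(f(x)[y]).
    Softmax discards this quantity because it normalizes it away;
    the paper reuses it as an unnormalized input density log p(x)."""
    z = logits - logits.max()
    return logits.max() + np.log(np.exp(z).sum())

logits = np.array([2.0, -1.0, 0.5])   # toy classifier output for one input x
print(log_softmax(logits))            # unchanged if a constant is added
print(unnormalized_log_px(logits))    # shifts when the logits shift
```

Adding a constant c to every logit leaves p(y|x) untouched but moves the logsumexp by exactly c — that is the unused degree of freedom the tweet refers to.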
Tomasz Darmetko retweeted
Do you formally know Monte-Carlo and TD learning, but don't intuitively understand the difference? This is for you. https://distill.pub/2019/paths-perspective-on-value-learning/ (with @samgreydanus) pic.twitter.com/6RwsBjFbU9
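The distinction the article builds intuition for can be sketched in tabular form (the learning rate, discount, and toy episode below are illustrative assumptions):

```python
def mc_update(V, episode, alpha=0.1, gamma=1.0):
    """Monte-Carlo: wait for the episode to end, then move each
    visited state's value toward the actual return that followed it."""
    G = 0.0
    for state, reward in reversed(episode):
        G = reward + gamma * G
        V[state] += alpha * (G - V[state])
    return V

def td_update(V, episode, alpha=0.1, gamma=1.0):
    """TD(0): after each step, move the value toward the one-step
    target r + gamma * V(next state) -- bootstrapping off the estimate."""
    for (s, r), nxt in zip(episode, episode[1:] + [None]):
        v_next = 0.0 if nxt is None else V[nxt[0]]
        V[s] += alpha * (r + gamma * v_next - V[s])
    return V

episode = [("A", 0.0), ("B", 1.0)]   # A -> B (reward 0), B -> terminal (reward 1)
print(mc_update({"A": 0.0, "B": 0.0}, episode))   # MC credits A right away
print(td_update({"A": 0.0, "B": 0.0}, episode))   # TD(0) needs more episodes
```

After one episode MC has already propagated the final reward back to state A, while TD(0) has not — because A was updated toward V(B) before V(B) itself moved.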
Tomasz Darmetko retweeted
The war between ML frameworks has raged on since the rebirth of deep learning. Who is winning?
@cHHillee's data analysis shows clear trends: PyTorch is winning dramatically among researchers, while Tensorflow still dominates industry. #PyTorch #Tensorflow https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/
Tomasz Darmetko retweeted
The paper that introduced Batch Norm http://arxiv.org/abs/1502.03167 combines clear intuition with compelling experiments (14x speedup on ImageNet!!) So why has 'internal covariate shift' remained controversial to this day? Thread
pic.twitter.com/L0BBmo0q4t
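For reference, the per-feature normalization the paper introduces, in a numpy sketch (gamma and beta stand in for the learned scale and shift):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature to zero mean / unit variance over the
    batch axis, then apply a learned affine transform."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])   # batch of 3 examples, 2 features
y = batch_norm(x)
print(y.mean(axis=0), y.std(axis=0))   # each column: ~0 mean, ~1 std
```

Whether the speedup comes from reducing "internal covariate shift" or from smoothing the optimization landscape is exactly the controversy the thread discusses; the operation itself is just this.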
Tomasz Darmetko retweeted
The Illustrated GPT-2 (Visualizing Transformer Language Models) New blog post visually exploring the insides of the model that dazzled us with its ability to write coherently and with conviction. We also look at other applications of this type of model. https://jalammar.github.io/illustrated-gpt2/ pic.twitter.com/nyGlAznRNF
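The core operation the post visualizes is scaled dot-product self-attention; a single-head numpy sketch with random matrices standing in for learned projections (GPT-2's causal mask is omitted for brevity):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one head. Each position
    mixes the values of all positions, weighted by how well its
    query matches their keys."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 token embeddings, dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.sum(axis=-1))          # (4, 8), each row sums to 1
```

In the real model each position would additionally be masked so it can only attend to earlier tokens, which is what makes GPT-2 a language model rather than an encoder.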
Tomasz Darmetko retweeted
Compressed everything I learned about how life sciences work in the last year (and 100+ interviews) into 6000 words: https://guzey.com/how-life-sciences-actually-work/
Tomasz Darmetko retweeted
Weight Agnostic Neural Networks
Inspired by precocial species in biology, we set out to search for neural net architectures that can already (sort of) perform various tasks even when they use random weight values.
Article: https://weightagnostic.github.io
PDF: https://arxiv.org/abs/1906.04358 pic.twitter.com/El2uzgxS5I
Tomasz Darmetko retweeted
A while ago, I blogged about a simple way to think about matrices, namely as bipartite graphs. Now I’d like to share yet another way to think about matrices: tensor network diagrams! Here, familiar things have nice pictures. New blog post! https://www.math3ma.com/blog/matrices-as-tensor-network-diagrams pic.twitter.com/6cAQP7kf4J
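The diagrammatic view translates directly to code: wires are indices, nodes are tensors, and joining two wires is contraction — which is exactly what `np.einsum` expresses:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)     # node with wires i, j
B = np.arange(12.0).reshape(3, 4)    # node with wires j, k

# Joining the shared 'j' wire of the two nodes = matrix product.
C = np.einsum("ij,jk->ik", A, B)
assert np.allclose(C, A @ B)

# A wire looped from a node back onto itself = trace.
M = np.arange(9.0).reshape(3, 3)
print(np.einsum("ii->", M))          # 0 + 4 + 8 = 12.0
```

Reading diagrams as einsum strings makes higher-order contractions (many wires, many nodes) no harder to write than a matrix product.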
Tomasz Darmetko retweeted
Does my unsupervised neural network learn syntax? In new #NAACL2019 paper with @chrmanning, our "structural probe" can show that your word representations embed entire parse trees. paper: https://nlp.stanford.edu/pubs/hewitt2019structural.pdf blog: https://nlp.stanford.edu/~johnhew/structural-probe.html code: https://github.com/john-hewitt/structural-probes/ 1/4 pic.twitter.com/G5cHK3kJ4w
Tomasz Darmetko retweeted
I've made this cheat sheet and I think it's important. Most stats 101 tests are simple linear models - including "non-parametric" tests. It's so simple we should only teach regression. Avoid confusing students with a zoo of named tests. https://lindeloev.github.io/tests-as-linear/ 1/n
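The cheat sheet's central claim is easy to verify numerically: the equal-variance two-sample t-test is a regression of y on a 0/1 group dummy. A pure-numpy check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(42)
g0 = rng.normal(0.0, 1.0, 20)        # group 0 samples (made-up data)
g1 = rng.normal(0.8, 1.0, 20)        # group 1 samples

# Classic two-sample t statistic (pooled variance, equal variances assumed).
n0, n1 = len(g0), len(g1)
sp2 = ((n0 - 1) * g0.var(ddof=1) + (n1 - 1) * g1.var(ddof=1)) / (n0 + n1 - 2)
t_classic = (g1.mean() - g0.mean()) / np.sqrt(sp2 * (1 / n0 + 1 / n1))

# The same test as a linear model: y = b0 + b1 * group_dummy.
y = np.concatenate([g0, g1])
X = np.column_stack([np.ones(n0 + n1), np.r_[np.zeros(n0), np.ones(n1)]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - 2)
se_b1 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_regression = beta[1] / se_b1       # t statistic on the slope

print(t_classic, t_regression)       # identical up to float error
```

The slope b1 is exactly the difference of group means, and its standard error reduces to the pooled-variance formula, so the two t statistics agree to machine precision.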
Tomasz Darmetko retweeted
1/6 Deep classifiers seem to be extremely invariant to *task-relevant* changes. We can change the content of any ImageNet image, without changing model predictions over the 1000 classes at all. Blog post @ https://medium.com/@j.jacobsen/deep-classifiers-ignore-almost-everything-they-see-and-how-we-may-be-able-to-fix-it-a6888012516f. with @JensBehrmann, Rich Zemel, @MatthiasBethge
Tomasz Darmetko retweeted
This. Don't waste time on domain specific tricks. Do work on abstract & general inductive biases like smoothness, relational structure, compositionality, in/equivariance, locality, stationarity, hierarchy, causality. Do think carefully & deeply about what is lacking in AI today. https://twitter.com/seth_stafford/status/1106574805686345728
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. - Rich Sutton http://www.incompleteideas.net/IncIdeas/BitterLesson.html
#ArtificialIntelligence #MachineLearning #DataScience #AI
"The Halo Drive: Fuel-free Relativistic Propulsion of Large Masses via Recycled Boomerang Photons" by
@david_kipping The idea is to use a moving black hole as a mirror that can energize laser beamed towards it. The mirrored beam is free energy. Clever!
http://coolworlds.astro.columbia.edu/halodrive_preprint.pdf …Hvala. Twitter će to iskoristiti za poboljšanje vaše vremenske crte. PoništiPoništi -
This is *the most important* direction of work in AI safety today. Recommendation systems have enormous power over society and politics, and we do not understand them. Who knows how much these systems helped Trump and Brexit to happen.
#MachineLearning #DataScience https://twitter.com/DeepMind/status/1101514121563041792
Differentiable Programming: Rather than always writing new programs for ML, we can incorporate existing ones, enabling physics engines inside deep learning-based robotics models.
#MachineLearning #DataScience https://twitter.com/FluxML/status/1100431790093844482
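A toy illustration of the idea — a hand-written reverse-mode pass through a forward-Euler integrator. This is a sketch of the concept only, not the Flux/Zygote machinery the linked tweet refers to:

```python
def simulate(v0, steps=100, dt=0.01, g=9.81):
    """Forward Euler for a projectile's height; returns the final height.
    This stands in for a 'physics engine' inside a learning pipeline."""
    x, v = 0.0, v0
    for _ in range(steps):
        x += dt * v
        v -= dt * g
    return x

def grad_v0(v0, steps=100, dt=0.01, g=9.81):
    """Reverse-mode sweep through the same loop: propagate
    d(final x)/d(state) backwards, step by step."""
    dx, dv = 1.0, 0.0        # seed: derivative of output w.r.t. final (x, v)
    for _ in range(steps):
        dv += dt * dx        # adjoint of 'x += dt * v'
    return dv

print(simulate(10.0), grad_v0(10.0))   # gradient = steps * dt = 1.0
```

With the simulation differentiable, its parameters (here the initial velocity) can be optimized by gradient descent alongside neural network weights, which is the point of the differentiable-programming framing.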
Tomasz Darmetko retweeted
I made this years ago, even before http://playground.tensorflow.org/ , but I never published it because I never finished it and stuff happened... Somehow it got on Hacker News, so here it is: Backpropagation explained via scrollytelling: https://google-developers.appspot.com/machine-learning/crash-course/backprop-scroll/ pic.twitter.com/2ELdktLAZn
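The chain-rule bookkeeping the scrollytelling piece walks through fits in a few lines; a one-hidden-unit sketch with a finite-difference check (all numbers below are arbitrary):

```python
import math

def forward_backward(x, y, w1, w2):
    """One hidden tanh unit, squared-error loss. The forward pass caches
    intermediates; the backward pass applies the chain rule from the
    loss back to each weight."""
    h = math.tanh(w1 * x)            # hidden activation
    y_hat = w2 * h                   # network output
    loss = 0.5 * (y_hat - y) ** 2

    dloss_dyhat = y_hat - y                      # dL/dy_hat
    grad_w2 = dloss_dyhat * h                    # dL/dw2
    dloss_dh = dloss_dyhat * w2                  # chain into the hidden layer
    grad_w1 = dloss_dh * (1 - h ** 2) * x        # tanh'(z) = 1 - tanh(z)^2
    return loss, grad_w1, grad_w2

# Check backprop against a numerical (central-difference) derivative.
x, y, w1, w2 = 0.5, 1.0, 0.3, -0.7
loss, g1, g2 = forward_backward(x, y, w1, w2)
eps = 1e-6
num_g1 = (forward_backward(x, y, w1 + eps, w2)[0]
          - forward_backward(x, y, w1 - eps, w2)[0]) / (2 * eps)
print(g1, num_g1)                    # analytic and numeric gradients agree
```

The numerical check is the standard way to convince yourself a hand-written backward pass is correct before trusting it for training.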
Neural Networks seem to follow a puzzlingly simple strategy to classify images. Viewing the decision-making of CNNs as a bag-of-feature strategy could explain several weird observations about CNNs. https://medium.com/bethgelab/neural-networks-seem-to-follow-a-puzzlingly-simple-strategy-to-classify-images-f4229317261f