Tweets
You have blocked @taehobyo. Are you sure you want to view these Tweets? Viewing them won't unblock @taehobyo.
Theodore Retweeted
- 2007, The Road to Quantum Artificial Intelligence https://arxiv.org/pdf/0705.3360.pdf
- 2015, Quantum algorithms: an overview https://arxiv.org/pdf/1511.04206.pdf
- 2018, Machine learning & artificial intelligence in the quantum domain: a review of recent progress (download, not PDF) https://iopscience.iop.org/article/10.1088/1361-6633/aab406
Theodore Retweeted
Some folks still seem confused about what deep learning is. Here is a definition: DL is constructing networks of parameterized functional modules & training them from examples using gradient-based optimization.... https://www.facebook.com/722677142/posts/10156463919392143/ …
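That definition can be shown end to end in a few lines. A minimal sketch in plain NumPy (toy data and hyperparameters invented here): one parameterized affine module, trained from examples by gradient-based optimization of a mean-squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy examples drawn from y = 3x - 1 plus noise.
X = rng.uniform(-1, 1, size=(64, 1))
y = 3 * X - 1 + 0.05 * rng.normal(size=(64, 1))

# One parameterized functional module: an affine map x -> Xw + b.
w, b = np.zeros((1, 1)), np.zeros(1)

lr = 0.5
for _ in range(200):
    pred = X @ w + b                   # forward pass through the module
    err = pred - y
    grad_w = 2 * X.T @ err / len(X)    # gradient of the mean squared error
    grad_b = 2 * err.mean(axis=0)
    w -= lr * grad_w                   # gradient-based update
    b -= lr * grad_b

print(w.item(), b.item())  # close to 3 and -1
```

Stacking more such modules (with nonlinearities between them) and backpropagating through the composition is exactly the "networks of parameterized functional modules" in the definition.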
Theodore Retweeted
We've just open-sourced the code for Stacked Capsule Autoencoders (NeurIPS '19): https://github.com/google-research/google-research/tree/master/stacked_capsule_autoencoders joint work with @sabour_sara, @yeewhye and @geoffreyhinton
Theodore Retweeted
80 years of AI research represented as connectionist (neural nets) vs. symbolic (rule-based). http://bit.ly/2PBqk65 v/
@Karmacoma @jphcoi @mazieres pic.twitter.com/VMF92TqueJ
Theodore Retweeted
DiffTaichi: Differentiable Programming for Physical Simulation “Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations.” When we have good priors about the world, it makes sense to use them! https://arxiv.org/abs/1910.00935 pic.twitter.com/Xr0Yumj16N
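The idea of optimizing a controller through a differentiable simulator can be illustrated with a hand-rolled toy (not DiffTaichi's autodiff; the dynamics, target, and step sizes are invented here): Euler-integrate a trivial dynamics, propagate the derivative alongside the state, and apply gradient descent to the controller parameter.

```python
dt, steps = 0.01, 100          # simulation horizon of 1 s
target_x = 2.0                 # hypothetical target position

def rollout(vx):
    """Euler-integrate horizontal position; return final x and d(final x)/d(vx)."""
    x, dx_dv = 0.0, 0.0
    for _ in range(steps):
        x += vx * dt           # forward dynamics step
        dx_dv += dt            # derivative propagated alongside the state
    return x, dx_dv

vx = 0.0                       # controller parameter to optimize
lr = 0.5
for _ in range(30):            # "optimized within only tens of iterations"
    x_final, dx_dv = rollout(vx)
    grad = 2 * (x_final - target_x) * dx_dv   # chain rule through the loss
    vx -= lr * grad

print(vx)  # converges to ~2.0, since final x = vx * 1 s
```

Because the simulator encodes the physics prior, the optimizer only has to search over the controller parameter, which is why convergence is so fast here.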
Theodore Retweeted
The https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0194889 paper, with its finding that the worst Stat forecasting method was more accurate than the best of the ML ones, has passed the 100,000 mark of views/downloads. None of those who have read/downloaded it has challenged its finding. We are still waiting! pic.twitter.com/3Y7iC51CXm
Theodore Retweeted
"Are labels required for improving adversarial robustness?" TL;DR: no! With only 10% of CIFAR10 labels, UAT has almost no drop in robust accuracy. With additional unlabeled data, UAT obtains SOTA robust accuracy. Paper: https://arxiv.org/abs/1905.13725 Code: https://github.com/deepmind/deepmind-research/tree/master/unsupervised_adversarial_training …pic.twitter.com/DToqzeMm8l
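For context on what "robust accuracy" is measured against: the canonical perturbation step is the fast gradient sign method, sketched below on a fixed linear classifier. The weights, input, and budget are invented here, and this single-step attack is only the simplest ingredient, not the paper's UAT procedure.

```python
import numpy as np

# Linear "model" with fixed weights, a stand-in for a trained network.
w = np.array([2.0, -1.0])
b = 0.1

def logits(x):
    return x @ w + b

x = np.array([0.5, 0.5])                 # clean input
y = 1.0                                  # true label in {0, 1}
eps = 0.1                                # L-infinity perturbation budget

# Gradient of the cross-entropy loss with respect to the input.
p = 1 / (1 + np.exp(-logits(x)))         # predicted probability of class 1
grad_x = (p - y) * w

# FGSM: one signed-gradient step increases the loss within the eps-ball.
x_adv = x + eps * np.sign(grad_x)

print(logits(x), logits(x_adv))  # the adversarial logit is pushed toward class 0
```

Adversarial training repeats this crafting step inside the training loop; the paper's point is that the inner step needs no labels, so unlabeled data can be used.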
Theodore Retweeted
The fact that evaluating ∇f(x) is as fast as f(x) is very important and often misunderstood http://timvieira.github.io/blog/post/2016/09/25/evaluating-fx-is-as-fast-as-fx/ https://twitter.com/gabrielpeyre/status/1167663307668373504
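The "cheap gradient principle" behind that claim comes from reverse-mode automatic differentiation: one forward pass records the computation, and one backward sweep of comparable cost yields all partial derivatives at once. A minimal tape-based sketch (a toy implementation supporting only + and *, not a real autodiff library):

```python
class Var:
    """Reverse-mode autodiff node: value plus (parent, local-derivative) edges."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """One reverse sweep over the recorded graph: cost ~ one forward pass."""
    order, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            order.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(order):          # propagate adjoints parent-ward
        for p, local in v.parents:
            p.grad += local * v.grad

x, y = Var(3.0), Var(4.0)
f = x * y + x * x                      # f = xy + x^2
backward(f)
print(f.value, x.grad, y.grad)         # 21.0, df/dx = y + 2x = 10.0, df/dy = x = 3.0
```

Note that both partials come from the single backward sweep; forward-mode differentiation would instead need one pass per input variable.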
Theodore Retweeted
Thrilled to be able to share what I've been working on for the last year - solving the fundamental equations of quantum mechanics with deep learning! https://arxiv.org/abs/1909.02487
Theodore Retweeted
In our new blog post, we review how brains replay experiences to strengthen memories, and how researchers use the same principle to train better AI systems: https://deepmind.com/blog/article/replay-in-biological-and-artificial-neural-networks
Theodore Retweeted
Can we scale gradient-based meta-learning? In Warped Gradient Descent, we meta-learn a geometry over the joint task parameter distribution. We can learn optimisers for RNNs and against catastrophic forgetting. W/ A. Rusu,
@rpascanu, H. Yin, @RaiaHadsell.
https://bit.ly/2ZLjVtH pic.twitter.com/vGjdf3Nm7C
Theodore Retweeted
Project Euphonia is a speech-to-text transcription model for those with atypical speech. In a new @interspeech2019 paper, learn how researchers are collaborating with the #ALS community to develop Euphonia for those with ALS or other speech impairments. https://goo.gle/2YLIS8h
Theodore Retweeted
GENESIS is the first fully probabilistic model for unsupervised image segmentation with amortized inference, developed by
@martinengelcke, @IngmarPosner, O. Parker-Jones and myself: https://arxiv.org/abs/1907.13052 pic.twitter.com/H3NSaygkZP
Theodore Retweeted
We recently developed a unified service system that allows Population Based Training to be scaled to diverse machine learning applications within Alphabet. We'll be presenting this paper at
@kdd_news, August 2019 in Anchorage, Alaska! https://arxiv.org/abs/1902.01894 pic.twitter.com/gWR2lBFT1i
Theodore Retweeted
The conventional wisdom that Gaussian processes are slow was broadly true 10 years ago, but not today. In GPyTorch, for example, even an exact GP on >100k points takes only minutes. By contrast, a modern NN on 50k points takes ~8 hours with a good GPU.
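For context on what "exact GP" means here: the posterior mean requires one Cholesky factorization of the kernel matrix, the O(n³) step that implementations like GPyTorch work to scale past. A minimal NumPy sketch on toy data (the RBF kernel, lengthscale, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: noisy samples of sin(x).
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

noise = 0.1 ** 2
K = rbf(X, X) + noise * np.eye(len(X))

# Exact GP posterior mean: one O(n^3) Cholesky, then O(n^2) per query batch.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

X_test = np.array([[0.0], [1.5]])
mean = rbf(X_test, X) @ alpha
print(mean)  # close to sin at the test points
```

The Cholesky step is what dominates at large n; the GPyTorch claim in the tweet is about replacing it with iterative, GPU-friendly solvers rather than abandoning exactness.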
Theodore Retweeted
Looking forward to giving a talk at UAI2019! Thanks to
@VibhavGogate and @ryan_p_adams for inviting me, and my wonderful collaborators at @DeepMindAI. Talk will be about KL regularised RL, multitask learning, meta learning, and neural processes. Probabilistic learning FTW! pic.twitter.com/y3jM57OIAn
Theodore Retweeted
Starting fall 2019, I will join the Department of Biomedical Engineering at McGill's Faculty of Medicine. Please mail me (dan??obz??k@gmail.com) for PhD, post-doc, or research engineer positions in my new group in Montreal.
@Neuro_Skeptic @neuroconscience @TheNeuro_MNI pic.twitter.com/DSBZGO0ASA
Theodore Retweeted
I don't see any Julian Jaynes references...
Theodore Retweeted
Subspace Inference for Bayesian Deep Learning. In our new
#UAI2019 paper we construct low dimensional subspaces for scalable Bayesian inference on modern deep nets (with code)! Mode connectivity makes a guest appearance. https://arxiv.org/abs/1907.07504 pic.twitter.com/OeFoegMrZX