scalar -> 0D tensor
vector -> 1D tensor
matrix -> 2D tensor
array -> 3D+ tensors
intercept -> bias
coefficients -> weights
link function -> activation function
negative log likelihood -> cross entropy loss
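A minimal sketch in base R of several entries of this dictionary at once (the variable names are illustrative, not from the thread): a logistic regression in stats speak is a one-neuron network with a sigmoid activation and binary cross-entropy loss in deep learning speak.

```r
# Logistic regression (stats) == one-neuron net with sigmoid
# activation and binary cross-entropy loss (deep learning).
set.seed(42)
x <- matrix(rnorm(200), ncol = 2)               # a matrix, i.e. a 2D tensor
y <- rbinom(100, 1, plogis(1 + x %*% c(2, -1)))

fit <- glm(y ~ x, family = binomial())          # link function -> activation
coef(fit)   # "(Intercept)" is the bias; the other coefficients are the weights

# The negative log likelihood glm() maximizes is, up to sign, the
# cross-entropy loss a network would minimize:
p <- fitted(fit)
-mean(y * log(p) + (1 - y) * log(1 - p))        # mean binary cross-entropy
-as.numeric(logLik(fit)) / length(y)            # same number
```

glm() fits this by iteratively reweighted least squares rather than gradient descent, but the objective being optimized is the same.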
ridge penalty -> weight decay
iteration -> epoch
str_split() -> tokenization
linear binary classifier -> perceptron
iterative gradient descent -> backpropagation
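One pair on this list is an exact algebraic identity, not just a renaming: adding an L2 (ridge) penalty to the loss gives the same gradient-descent update as multiplicatively decaying the weights each step. A minimal sketch (the function names are mine):

```r
# Ridge penalty vs. weight decay: one gradient-descent step on a loss
# with an L2 penalty lambda * sum(w^2) / 2. The penalty's gradient is
# lambda * w, so the update
#   w <- w - lr * (grad + lambda * w)
# equals first "decaying" the weights by (1 - lr * lambda).
step_ridge <- function(w, grad, lr, lambda) {
  w - lr * (grad + lambda * w)
}
step_decay <- function(w, grad, lr, lambda) {
  (1 - lr * lambda) * w - lr * grad
}

w <- c(0.5, -1.2); g <- c(0.1, 0.3)
step_ridge(w, g, lr = 0.01, lambda = 0.1)
step_decay(w, g, lr = 0.01, lambda = 0.1)   # identical result
```

The identity holds for plain gradient descent; with adaptive optimizers such as Adam the two diverge, which is why decoupled weight decay exists as a separate idea.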
Unrelated to this list: a shoutout to RStudio Cloud. When the hotel wifi didn't provide a sufficient signal, many of us resorted to mobile hotspots, effectively doing deep learning on GPUs from our phones. 2020 and all that jazz.
To be fair, many of the terms you are calling "fancy pants" are actually really old and derive from the computer science and engineering world. It's an interesting observation that a really deep historic split from statistics allowed this parallel vocabulary to develop.
Spot on! Further, maths people (especially in inverse problems) have yet another vocabulary for overlapping themes (e.g. Tikhonov regularization for the ridge penalty). But I'd say backprop is one way of implementing iterative gradient descent on a neural network: backprop computes the gradients, gradient descent uses them.
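A tiny base R sketch of that distinction (the toy network and numbers are made up for illustration): the backward pass, i.e. backpropagation, is just the chain rule producing gradients; the gradient-descent update then consumes them.

```r
# One hidden unit, one output unit, cross-entropy loss.
sigmoid <- function(z) 1 / (1 + exp(-z))
x <- 0.7; y <- 1; w1 <- 0.3; w2 <- -0.5; lr <- 0.1

# forward pass
h <- sigmoid(w1 * x)
p <- sigmoid(w2 * h)
loss <- -(y * log(p) + (1 - y) * log(1 - p))   # cross-entropy

# backward pass (backpropagation = repeated chain rule)
dp  <- p - y                  # dLoss/dz2 for sigmoid + cross-entropy
dw2 <- dp * h                 # gradient w.r.t. w2
dh  <- dp * w2                # gradient flowing back to the hidden unit
dw1 <- dh * h * (1 - h) * x   # gradient w.r.t. w1

# the gradient-descent update itself
w1 <- w1 - lr * dw1
w2 <- w2 - lr * dw2
```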
This list is spot on and also pretty helpful!