Tried to avoid hyperbolic chaos by reducing the sampling temperature from 1.0 to 0.8. Now the neural net is playing it safer (arguably TOO safe in those first two patterns), but is it still going hyperbolic? pic.twitter.com/dYEeMBEJIV
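(For context, here is a minimal sketch of what temperature scaling does during sampling; the function name and the toy logits are illustrative, not the actual generation code used for these patterns.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_token(logits, temperature=0.8):
    """Sample one token id from raw logits after temperature scaling.

    Lowering the temperature (0.8 vs 1.0) sharpens the softmax distribution,
    so high-probability tokens win more often and the output plays it safer.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy example: three candidate tokens, the first one strongly favored
print(sample_token([2.0, 1.0, 0.5], temperature=0.8))
```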
Lumpy top looks fabulous with red edge! https://twitter.com/paulasimone/status/1169349210036006912?s=21
I summarized the (faintly disquieting) story of HAT3000 here: https://twitter.com/JanelleCShane/status/1169310779289456640?s=20
That's pretty surprising! (I say that having no real intuitions about the size of the crochet internet, but it does intrigue me that even going from ~350M to ~770M parameters yields jumps like this.)
I didn’t try this exercise with the 345M model; it might have been able to do it too. These patterns are noticeably more coherent than I was getting with the fine-tuned 345M, but I was also using a higher temperature setting then to get more interesting pattern names.
It's amazing how much the pre-trained model already knows. When I fine-tune now, I always sample prompted output like this, because it lets you get similar results while keeping so much more of the rich world knowledge in the pre-trained model.
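(A rough sketch of what sampling prompted output from a pre-trained model can look like with the Hugging Face transformers library; the checkpoint name, prompt, and generation settings here are assumptions for illustration, not the exact workflow described above.)

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# gpt2-large is the ~774M-parameter checkpoint; a fine-tuned checkpoint
# path could be swapped in to compare against prompted pre-trained output.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

# Prompt the pre-trained model with a couple of example pattern names
# so it continues in the same style without any fine-tuning.
prompt = "Crochet hat patterns:\n1. Lumpy Top Hat\n2."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,        # lower temperature -> safer, less "hyperbolic" output
    max_new_tokens=60,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```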
"So many eyes" 