Or rather a sort of image-to-language model. A reverse DALL·E 2. Sometimes you have to throw away an entire old way of seeing and coding and learn a new way of seeing before you can write again in an exothermic way. I.e., energizing.
Running the old model in inference-only mode on Substack. But training is the fun, energizing, and challenging part. Once a way of seeing is fully trained and converged, it becomes like glasses you can take off and put back on.
It feels really awkward to deliberately write in a new way until you hit something like 100k words in it. No dopamine loop. This is the seeing-eye training. After 100k words the language sort of trains on itself. I guess that's deliberate practice.
When I'm speaking with friends in ML or computer science, we use so many shortcut terms for complex topics: local vs. global optimum, explore/exploit, Pareto principle, bus factor, ... The same ideas can be conveyed to others without the jargon, but many of the technical terms turn out to be surprisingly general!