Conversation

For general-purpose use, I've been pleased with the RTX 3060. Works well with TensorFlow. Had to fight to get PyTorch working, but it works. Darknet was painless to get going with CUDA 10. What it lacks in speed it makes up for in memory (12 GB). Usually sub-$1K.
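(As a sanity check after setup, both frameworks can report whether they actually see the card. A minimal sketch, assuming TensorFlow 2.x and a CUDA-enabled PyTorch build are installed:)

    # Minimal sketch: confirm TensorFlow and PyTorch can see the GPU.
    # Assumes TensorFlow 2.x and a CUDA-enabled PyTorch build are installed.
    import tensorflow as tf
    import torch

    # TensorFlow: list visible GPUs.
    print("TF GPUs:", tf.config.list_physical_devices("GPU"))

    # PyTorch: check CUDA availability and report device name / memory.
    if torch.cuda.is_available():
        print("PyTorch device:", torch.cuda.get_device_name(0))
        props = torch.cuda.get_device_properties(0)
        print("VRAM (GB):", round(props.total_memory / 1024**3, 1))
    else:
        print("PyTorch does not see a CUDA device")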
If your question is "how can I train the most/largest models the fastest for under $10K", cloud services are the answer. An A100 costs ~$10K, but you can rent one from (say) Google for under $3/hour. Only worth buying if you have a very consistent workload.
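(Rough break-even arithmetic, using the thread's approximate figures of ~$10K to buy and ~$3/hour to rent:)

    # Break-even sketch: how many rental hours equal the purchase price.
    # Figures are the approximate ones quoted above, not exact market prices.
    purchase_price = 10_000   # USD, approximate A100 cost
    rental_rate = 3.0         # USD per hour, approximate cloud rate
    break_even_hours = purchase_price / rental_rate
    print(f"Break-even: {break_even_hours:.0f} GPU-hours "
          f"(~{break_even_hours / 24 / 30:.1f} months of 24/7 use)")

At roughly 3,300 GPU-hours (about 4–5 months of round-the-clock training), renting stays cheaper unless the card is kept busy well past that point.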