Conversation

Replying to
The problem is that the biggest models, trained on the biggest piles of hardware, keep winning. The small ones are always qualitatively a generation behind. I suspect the future will be public training of large models plus hybrid inference on augmented versions of those large models. It's already so in vision.
Replying to
I would imagine pricing in image generation will converge to some hardware requirement plus minor fees. Do these models have diminishing rates of return once they're good enough? If so, even a one-generation lag might not be enough of a value-add for you to pay a higher price.
Replying to
Well, at the moment, even with billions of parameters requiring cloud provisioning, they're not really useful yet. They're still in a mix of the toy/demo/lab stage. Copilot and protein structure prediction are the only two I'd say are close to positive ROI now. Image generation is getting there; text, not so much.
Replying to
Search was orders of magnitude cheaper from the get-go: too cheap to meter even before Google. ML is not. Training is very expensive, and even inference at the bleeding edge still seems to be in the dollars-per-inference range, not sub-penny. The unit economics are closer to video streaming circa 1997.
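The unit-economics gap can be sketched with a back-of-envelope calculation. All the rates below (GPU hourly cost, GPU count, seconds per request) are hypothetical illustrations, not measured figures:

```python
def cost_per_request(hourly_usd: float, machines: int, seconds: float) -> float:
    """Naive serving cost per request: rented machine time only.

    Ignores utilization, batching, and engineering overhead, so it's
    a lower bound on real serving cost.
    """
    return hourly_usd * machines * seconds / 3600.0

# Hypothetical: a large model sharded across 8 accelerators at
# $2.50/GPU-hour, taking 30 seconds per generation.
big_model = cost_per_request(2.50, 8, 30)

# Versus a search-style query: a slice of one cheap machine for ~50 ms.
search_query = cost_per_request(0.10, 1, 0.05)

print(f"large model: ${big_model:.4f}/request")
print(f"search:      ${search_query:.8f}/request")
```

Even with these rough assumptions, the two land many orders of magnitude apart, which is the point about metering: one is billable per request, the other effectively isn't.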