So, multi-speaker neural TTS models have gotten eerily good, and the few-shot voice-cloning NNs are now credible with just a few minutes of sample audio. Has anyone told the various media fandoms they could get their favorite characters' voices on tap with some elbow grease...?
-
-
And there's a pretty interesting argument over, e.g., how an impersonator's impression of a character differs, as an artistic work, from an (indistinguishable?) simulacrum of that character. Is the latter *even more* transformative, or less so? Is it *too* good?
-
I'm wincing thinking about fueling the 'derivative vs transformative use of dataset' argument that is *still* unresolved for deep learning & ML in general - do a trained model, and its outputs, constitute a new work, or are they derivative of *every* copyrighted work in the training data?