It's easy to use deep learning to generate notes that sound like music, in the same way that it's easy to generate text that looks like natural language. But it's nearly impossible to generate *good* music that way, much like you can't generate a good 2-page story or poem.
However, algorithms (and ML in particular) absolutely do have a role to play in music creation. What's broken is the general approach of statistical mimicry, e.g. raw deep learning. To generate good music programmatically, you need an algorithmic model of what makes music good.
If you understand what makes music good with a sufficient level of clarity, you can express it in rules form, and seek to algorithmically maximize this greatness factor.
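As a toy illustration of this idea, here is a minimal sketch of a rule-based "greatness factor" plus a search that maximizes it. The scoring rules (reward stepwise motion, penalize large leaps and repeated notes, reward returning to the opening pitch class) are hypothetical stand-ins chosen for illustration, not a real model of musical quality, and the hill-climbing optimizer is just one possible search strategy.

```python
import random

def score(melody):
    """Hypothetical rule-based goodness score for a melody (list of MIDI pitches)."""
    s = 0.0
    for a, b in zip(melody, melody[1:]):
        step = abs(a - b)
        if step == 0:
            s -= 0.5   # penalize immediate note repetition
        elif step <= 2:
            s += 1.0   # reward stepwise motion
        elif step > 7:
            s -= 1.0   # penalize large leaps
    if melody and melody[0] % 12 == melody[-1] % 12:
        s += 2.0       # reward ending on the opening pitch class
    return s

def optimize(melody, steps=2000, seed=0):
    """Hill climbing: mutate one note at a time, keep only improvements."""
    rng = random.Random(seed)
    best = list(melody)
    best_score = score(best)
    for _ in range(steps):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] = rng.randint(60, 72)  # restrict pitches to C4..C5
        cand_score = score(cand)
        if cand_score > best_score:
            best, best_score = cand, cand_score
    return best, best_score
```

Because the optimizer only ever accepts improvements, the returned score is at least that of the starting melody; the point is that once the criterion is explicit, any search procedure (hill climbing, beam search, evolutionary methods) can be pointed at it.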
As usual with AI, this requires first understanding the subject matter by yourself, instead of blindly throwing a large dataset at a large model -- an approach which could only ever achieve local interpolation. Find the model, don't just fit a curve.
Reply: Why is the space for music smaller? How do you quantify this? I would assume the entropy rate for music would be higher than for text.