It's easy to use deep learning to generate notes that sound like music, in the same way that it's easy to generate text that looks like natural language. But it's nearly impossible to generate *good* music that way, much like you can't generate a good two-page story or poem that way.
-
-
As usual with AI, this requires first understanding the subject matter by yourself, instead of blindly throwing a large dataset at a large model -- an approach which could only ever achieve local interpolation. Find the model, don't just fit a curve.
-
-
-
The real question is whether a "greatness factor" is even well defined.
-
-
-
In section 4 of the draft, on finding "minimum entropy" with an HMM, I discuss music technologist Cárthach Ó Nuanáin and his dissertation, "Connecting Time and Timbre: Computational Methods for Generative Rhythmic Loops in Symbolic and Signal Domains," which uses DAGs and HMMs.
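As a toy illustration of the symbolic side of that approach — a minimal sketch, not the dissertation's actual model (the drum states, probabilities, and function names below are invented, and for simplicity this uses a plain first-order Markov chain over visible events rather than a full HMM with hidden states) — sampling a transition matrix is enough to generate a symbolic rhythmic loop:

```python
import random

# Made-up vocabulary of symbolic drum events and transition
# probabilities; a real system would learn these from a corpus.
TRANSITIONS = {
    "kick":  {"kick": 0.1, "snare": 0.3, "hat": 0.5, "rest": 0.1},
    "snare": {"kick": 0.4, "snare": 0.1, "hat": 0.4, "rest": 0.1},
    "hat":   {"kick": 0.3, "snare": 0.3, "hat": 0.3, "rest": 0.1},
    "rest":  {"kick": 0.5, "snare": 0.2, "hat": 0.2, "rest": 0.1},
}

def generate_loop(length=16, start="kick", seed=None):
    """Sample a symbolic loop by walking the Markov chain."""
    rng = random.Random(seed)
    loop = [start]
    for _ in range(length - 1):
        probs = TRANSITIONS[loop[-1]]
        nxt = rng.choices(list(probs), weights=list(probs.values()))[0]
        loop.append(nxt)
    return loop

if __name__ == "__main__":
    print(generate_loop(16, seed=42))
```

This is exactly the "local interpolation" criticized upthread: the chain reproduces short-range statistics of its transition table but has no notion of phrase structure, culture, or context.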
-
-
-
So much of what matters, even just to categorize existing music, is cultural (I learned this from data-science people at Spotify). Music that is entirely culturally separate can have a shockingly similar sonic signature (e.g., US country and Vietnamese folk music).
-
And so much of what makes music meaningful is its cultural, social, and historical context, not just its aesthetic beauty. Think Bob Dylan or punk music.