This lovely paper isolates a nice pattern for designing ML-enhanced interfaces (IA, not AI): design a shared representation for domain actions, so that suggestions can be fluidly and incrementally adapted alongside user-initiated actions. pnas.org/content/pnas/1
Translation is a great example. I remember first experiencing type-ahead in Google Search, and then loving how it evolved into tab-completable email suggestions & @-tagging people.
Looking forward especially to how this paradigm evolves beyond text / natural language.
There are two nice examples in the paper that are more structured! (data wrangling operations, visualization) Though both involve an intermediate textual DSL as the shared representation.
Yeah, the Voyager / Vega-Lite visualization grammar is especially interesting in this context. By a small leap: infographics are still awfully hard to design. Going from Excel -> graph is one paradigm. But having a -like way to cycle through representations of graphical data..
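To make the "shared representation" idea concrete: here is a minimal sketch, assuming Vega-Lite's JSON spec shape. Because the chart lives as a declarative spec, "cycling representations" is just a small local edit to one field, which is exactly what lets a suggestion system propose alternatives incrementally alongside user edits. The `cycle_mark` helper is hypothetical, not part of Vega-Lite or Voyager.

```python
# Sketch: a Vega-Lite-style chart spec as a shared representation.
# (Field names follow Vega-Lite's JSON schema; cycle_mark is a
# hypothetical helper for illustration, not a real API.)

MARKS = ["bar", "line", "point", "area"]

def cycle_mark(spec):
    """Return a new spec with the next mark type in the cycle."""
    i = MARKS.index(spec["mark"])
    # Everything else (data, encodings) is untouched: a suggestion
    # is just a small diff against the shared spec.
    return {**spec, "mark": MARKS[(i + 1) % len(MARKS)]}

spec = {
    "data": {"values": [{"x": "a", "y": 3}, {"x": "b", "y": 7}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "x", "type": "nominal"},
        "y": {"field": "y", "type": "quantitative"},
    },
}

print(cycle_mark(spec)["mark"])  # -> line
```

The point isn't the helper itself but the design choice: because user and system both edit the same declarative artifact, suggestions compose with user-initiated actions instead of fighting them.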
It would be mighty cool if such an IA system could ideate on (in a / d3 type way) novel representations of graphical data themselves.