1/7
Had a great Skype coffee chat with @sachitmishradev this weekend about designing and developing for #VoiceFirst interfaces.
This is one tangent of our conversation that stuck with me the last few days...
2/7 One question that we #VoiceCommunity people commonly get asked is the following: are we converging toward multimodal UIs (vs. voice-only, screen-only, etc.)? Generally people ask about voice + screen combos, but this can be expanded to AR/VR and beyond...
3/7 Of course, Google and Amazon have both launched screen-ful (for lack of a better term?) Voice First devices like Echo Show & Home Hub. IMO, the decision to design a voice/screen-only UX, vs. mixing and matching input/output modes, always has to be rooted in the use case.
4/7 One example: humans speak faster than we can type, but we can read (and visually scan/compare multiple data points) faster than we can hear. Therefore, a system that can receive speech as input and return visuals as output makes a lot of sense in certain scenarios.
5/7 Sachit expanded upon this and mentioned that multimodality could also be considered an intermediary step to the "endgame" vs. an ultimate goal:
6/7 Once a voice-only UI can sift through complicated requests, use context to identify the user's needs, *and* present streamlined audio output in a way that does not overload the user with info -- *without* having to lean on visual output to augment the response -- we've delivered.
7/7 Then we truly can tailor permutations of voice|screen|multimodal UIs to the use cases that fit them best. We won't need to defer to visual output in instances where the user would prefer to hear the whole response. And we can choose to use visuals in cases where it makes sense.
@revanhoe @LisaFalkson @NoelleLaCharite @WomenInVoice #voicecommunity @scotwestwater @SJW75 @maryparks @TweeterStewart @suryavanka @nnyelloji Would love to hear other thoughts on this! @sachitmishradev, hope I captured this accurately!
Replying to @ElleForLanguage @revanhoe and
Brielle, brilliance! Indeed the end point is #VoiceFirst + situational images & video, in context & on demand. All my designs & my advice are based on this point. We assume images are far more important than they actually are. #TheIntelligenceAmpilifier is high context #VoiceFirst.
Brielle, some #VoiceFirst platforms and designers have come to see cards as a solution to problems that a better understanding of the work-to-be-done could solve with just a high-context voice response. Most folks will never see these cards, and this thinking set back Siri & is soon to set back Google.