It still feels like a dream.
Yes, I am working on #Vtuber tech at Google AI!
Our new BlazePose GHUM Holistic model predicts full body and hand joint rotations for 3D avatars. Available soon at mediapipe.dev.
Paper and live demo for #CVPR2022.
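Since the model outputs joint rotations rather than just landmark positions, they can be retargeted to an avatar rig directly. As a rough, hypothetical sketch of the kind of math involved (this is illustrative only, not MediaPipe's actual API), here is a shortest-arc quaternion that rotates a rig bone's rest direction onto a tracked bone direction:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def quat_from_vectors(u, v):
    """Shortest-arc unit quaternion (w, x, y, z) rotating direction u onto v.

    Assumes u and v are not exactly opposite (the antiparallel case needs
    a special-cased axis choice).
    """
    u, v = normalize(u), normalize(v)
    dot = sum(a * b for a, b in zip(u, v))
    # Cross product u x v gives the rotation axis (scaled by sin of the angle).
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    w = 1.0 + dot
    n = math.sqrt(w * w + cx * cx + cy * cy + cz * cz)
    return (w / n, cx / n, cy / n, cz / n)

# Example: rest-pose bone along +X, tracked bone along +Y
# gives a 90-degree rotation about +Z.
q = quat_from_vectors((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

In a real retargeting pipeline the rotations would of course come from the model per joint and be expressed in each bone's local frame; this only shows the single-bone geometry.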
Here's what my camera looks like
The stars really had to align for this video. It was so hard to get a sunny weekend with no one in the office.
A couple of Googlers did see me dancing from the hallway.
The cringe!
Paper is finally up. It's not official until it's on arXiv. arxiv.org/abs/2206.11678
I never thought I'd be working in research, especially on something for Vtubers and XR. I can't thank the team enough for their hard work this past year, and I'm excited for what's next!
Big fan of his Umapyoi Densetsu dance. His energy inspires me to continue developing motion capture!
Side by side comparison:
[ A single webcam ] vs [ OptiTrack Flex + Manus PrimeII + Rokoko Smartgloves ]
We'll continue improving!
Umapyoi!
I forgot to ask: could this program be used with a VRoid model/avatar? A full-body suit (mocap or Rokoko suit) is very expensive to own, so having something reasonably priced would be an absolute dream come true for me (which is why I ask).
The demo actually does use VRM models. Keep in mind, this is currently just the tracking tech itself, not a full app. When we release it, hopefully some app developers will add it to their products!
That is pretty fantastic! Can you speak at all about how many cameras this uses, or is there some other methodology?
You can find our paper/video explanation here. The arXiv link is still being prepared. We use a single camera.