Himangi Mittal
@HimangiMittal
MS in Robotics (MSR) student at Carnegie Mellon University working in Vision, Robotics, and Self-Supervised Learning.
Pittsburgh, Pennsylvania, USA · himangim.github.io · Joined January 2019

Himangi Mittal’s Tweets

I will be attending #NeurIPS2022 in person this week. Feel free to DM me if you are around and want to talk about self-supervised learning, multi-modal understanding, or anything over coffee. I’d love to chat 🤖😁
Can we learn meaningful representations from interaction-rich, untrimmed, and multi-modal streams of audio-visual, egocentric data? 👉 Check out our #NeurIPS2022 work in Poster Session 2, 4PM - 6PM CST, Hall J #215.
Quote Tweet
Excited to introduce our #NeurIPS2022 paper w/ Pedro Morgado, @unnatjain2010, Abhinav Gupta. We propose a method to learn meaningful representations from interaction-rich, untrimmed, multi-modal, egocentric data. Paper: arxiv.org/abs/2209.13583 Code: github.com/HimangiM/RepLAI
Embedded video (GIF)
Thank you so much for tweeting about our work! Our open-source code is also available here: github.com/HimangiM/RepLAI #NeurIPS2022
Quote Tweet
🚨Cool paper alert🚨 @HimangiMittal shares a system for learning state change representations from audio sounds associated w/ object interactions in Ego4D & EPIC. Benefits downstream tasks and presents ways to isolate interactions in big video datasets arxiv.org/pdf/2209.13583
Image
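The thread above describes using audio to find "moments of interaction" in long, untrimmed egocentric video. Below is a minimal sketch of that general idea, not the actual RepLAI pipeline: it treats peaks in short-time audio energy as candidate interaction moments and returns their timestamps. Every function name and parameter here is my own choice, not from the paper's codebase.

```python
import numpy as np
from scipy.signal import find_peaks

def moments_of_interaction(audio, sr, win_sec=0.05, min_gap_sec=1.0):
    """Return timestamps (seconds) of candidate interaction moments,
    detected as peaks in short-time audio energy. A crude stand-in for
    audio-based moment-of-interaction detection."""
    win = int(sr * win_sec)
    n = len(audio) // win
    # short-time energy over non-overlapping windows
    energy = (audio[: n * win].reshape(n, win) ** 2).mean(axis=1)
    # peaks must clear an energy threshold and be at least min_gap_sec apart
    peaks, _ = find_peaks(
        energy,
        distance=max(1, int(min_gap_sec / win_sec)),
        height=energy.mean() + energy.std(),
    )
    return peaks * win_sec

# Usage sketch: sample short video clips centered on each detected moment,
# then train audio-visual objectives on those clips instead of random ones.
audio = np.random.randn(16000 * 60)  # stand-in for a 1-minute waveform at 16 kHz
print(moments_of_interaction(audio, sr=16000))
```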
I will present our CVPR work on "3D Reconstruction of Generic Objects Held in Hand" at Booth 129b this afternoon. Looking forward to chatting with you.
Quote Tweet
Check out @yufei_ye’s upcoming CVPR work, which shows how to reconstruct generic hand-held objects from a single RGB image. See the project page (judyye.github.io/ihoi/) for the paper, code, Colab demo to try your own images, and a video explaining the approach.
Embedded video (GIF)
Excited to share our #CVPR2022 paper, a collaboration of & , that achieves SOTA on Online Action Detection across all benchmarks! 🌟🔥 Check out our poster on Jun 24, 14:30-17:00, Session 4.2, ID 119. Will be there in person w/ . Drop by & say hi! 👋😁
Quote Tweet
GateHUB: Gated History Unit with Background Suppression for Online Action Detection abs: arxiv.org/abs/2206.04668
Embedded video (0:17)
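Online action detection means predicting the current frame's action from past frames only, never the future. As a toy illustration of that causal setup, here is a minimal PyTorch model that attends over a history buffer and gates how much of that context reaches the per-frame prediction. This loosely echoes the "gated history" idea in the title but is not the GateHUB architecture; all names and sizes are mine.

```python
import torch
import torch.nn as nn

class TinyOnlineDetector(nn.Module):
    """Per-frame action logits from past frames only (online setting)."""
    def __init__(self, feat_dim=512, num_classes=21):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, history, present):
        # history: (B, T, D) past frame features; present: (B, 1, D) current frame
        ctx, _ = self.attn(present, history, history)   # attend over the past
        g = self.gate(present)                          # learned gate on history context
        return self.head(present + g * ctx).squeeze(1)  # (B, num_classes)

model = TinyOnlineDetector()
logits = model(torch.randn(2, 32, 512), torch.randn(2, 1, 512))
print(logits.shape)  # torch.Size([2, 21])
```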
Please visit our #ICRA2022 Oral session (10:05AM, Room 119) and poster (Room 109-126) on May 25 to discuss Self-supervised Transparent Liquid Segmentation for Robotic Pouring. For visualizations & videos, please see: sites.google.com/view/transpare
Quote Tweet
How can we learn to segment transparent liquids without manual labels? Excited to share our paper which uses image translation to segment transparent liquids such as water for robotic pouring. To appear at #ICRA2022 @CMU_Robotics sites.google.com/view/transpare (1/6)
Embedded video (GIF)
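As I understand the quoted thread, the trick for avoiding manual labels is: film colored liquid, where masks come cheaply from color thresholding, then use image translation to make the liquid look transparent so those masks transfer as supervision for a segmentation network. Below is only the cheap mask-extraction step, with illustrative HSV bounds of my own choosing (real bounds depend on the dye and lighting):

```python
import cv2
import numpy as np

def colored_liquid_mask(bgr_image, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Segment brightly colored (here: green-dyed) liquid by HSV thresholding.
    The bounds are hypothetical examples, not values from the paper."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # light morphological cleanup to drop speckle noise
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# These masks would pair with translated "transparent" versions of the same
# frames to supervise a standard segmentation network (e.g., a U-Net).
```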
Interested in when neural nets can outperform classical methods for solving PDEs? Join us at our #NeurIPS2021 poster today, 11:30am-1:00pm ET, if you wanna know more! Poster: neurips.cc/virtual/2021/p Paper: arxiv.org/abs/2103.02138
Quote Tweet
Want to learn about the neural net complexity of solutions to elliptic PDEs whose coefficients are small neural nets? Come talk to @__tm__157 , me and @zacharylipton at our poster tomorrow (Fri), 11:30a-1p EST: neurips.cc/virtual/2021/p Thread summarizing the paper below.
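The quoted paper is theory (how complex a network must be to represent solutions of elliptic PDEs), but for a concrete feel for "neural nets solving PDEs," here is a minimal physics-informed-style sketch for the 1D Poisson problem u''(x) = -sin(x) on [0, π] with zero boundary values, whose true solution is u(x) = sin(x). It is unrelated to the paper's analysis, and every architectural and training choice here is mine.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1) * torch.pi              # interior collocation points
    x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = (d2u + torch.sin(x)).pow(2).mean()  # enforce u'' = -sin(x)
    xb = torch.tensor([[0.0], [torch.pi]])
    boundary = net(xb).pow(2).mean()               # enforce u(0) = u(pi) = 0
    loss = residual + boundary
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[torch.pi / 2]])))  # should approach sin(pi/2) = 1.0
```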
Join us at #CVPR2020 from 11:00am-1:00pm PST on Thursday 06/18, and again from 11:00pm-1:00am (+1) PST, to learn more about our work on "Just Go with the Flow: Self-Supervised Scene Flow Estimation".
Quote Tweet
Check out our #CVPR2020 oral, "Just Go with the Flow: Self-Supervised Scene Flow Estimation" which uses two self-supervised losses to learn scene flow on unannotated datasets! Project: just-go-with-the-flow.github.io Paper: arxiv.org/abs/1912.00497
Embedded video (GIF)
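The two self-supervised losses the quoted tweet mentions are, roughly, a nearest-neighbor loss (each point warped by its predicted flow should land near some point in the next frame's cloud) and a cycle-consistency loss (warping forward then backward should return each point home). A minimal sketch of both, with shapes and names of my choosing; flow_bwd would come from a second pass of the flow network on the warped cloud:

```python
import torch

def nearest_neighbor_loss(p1, flow_fwd, p2):
    """Warped points of p1 should be close to p2 (no labels needed)."""
    warped = p1 + flow_fwd            # (N, 3) points moved by predicted flow
    dists = torch.cdist(warped, p2)   # (N, M) pairwise distances to next frame
    return dists.min(dim=1).values.mean()

def cycle_consistency_loss(flow_fwd, flow_bwd):
    """Forward flow then backward flow should cancel out per point."""
    return (flow_fwd + flow_bwd).norm(dim=1).mean()

# Usage sketch with random stand-ins for two point clouds and a network output:
p1, p2 = torch.randn(1024, 3), torch.randn(2048, 3)
flow_fwd = torch.zeros(1024, 3, requires_grad=True)
loss = nearest_neighbor_loss(p1, flow_fwd, p2)
loss.backward()
```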