Andy Zeng
@andyzengtweets
Research Scientist. PhD. Working on making robots smarter.
andyzeng.github.io · Joined September 2017

Andy Zeng’s Tweets

Pinned Tweet
Turns out robots can write their own code using LLMs, given natural language instructions from people! Part of the magic is hierarchical code-gen (e.g., recursively defining functions), which also improves the state of the art on generic code-gen benchmarks. Check out the 🧵 from Jacky!
Quote Tweet
How can robots perform a wide variety of novel tasks from natural language? Excited to present Code as Policies - using language models to directly write robot policy code from language instructions. See the paper, colabs, blog, and demos at code-as-policies.github.io. Long 🧵👇
[Embedded video, 0:20]
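The hierarchical trick can be sketched in a few lines: generate top-level code, find calls to functions the model invented but never defined, and recursively ask for their definitions. Everything below (`llm`, the prompts, the one-shot parsing) is a hypothetical stand-in for illustration, not the paper's actual pipeline:

```python
# Sketch of hierarchical code generation. `llm` is a hypothetical
# prompt -> code-string function; known_apis should include every
# function the generated code may legitimately call (incl. builtins).
import ast

def undefined_names(code, defined):
    """Collect function names that are called but not in `defined`."""
    tree = ast.parse(code)
    called = {
        node.func.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    }
    return called - defined

def hierarchical_codegen(instruction, llm, known_apis):
    """Generate code for `instruction`, then recursively ask the LLM
    to define any helper functions it invented but did not define."""
    code = llm(f"# Write Python to: {instruction}")
    defined = set(known_apis)
    frontier = undefined_names(code, defined)
    snippets = [code]
    while frontier:
        name = frontier.pop()
        body = llm(f"# Define the function {name}")
        snippets.insert(0, body)   # helpers go above their call sites
        defined.add(name)
        frontier |= undefined_names(body, defined)
    return "\n\n".join(snippets)
```

The recursion bottoms out when generated helpers only call known primitives, which is also why the same trick transfers to generic code-gen benchmarks.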
You can play with Code as Policies now! 👇
Quote Tweet
Just released our Code as Policies robot demo using @Gradio on @huggingface!! (thx @_akhaliq for the help!) You need an @OpenAI API key and ideally Codex access, but GPT-3 also works fine! The sim/video rendering may take some time so pls be patient 🙏 huggingface.co/spaces/jackyli
[Embedded video, 0:26]
Check out the first(?) robotics demo on 🤗
Quote Tweet (demo announcement quoted above)
Can't wait for the #NeurIPS2022 Workshop on Robot Learning? 🤖👨‍🏫 We got you! In the upcoming weeks, we will post talks of our panelists starting with & who have cool insights on "Language as Robot Middleware" 🤖💬🚀
Code & Data for Semantic Abstraction is out! semantic-abstraction.cs.columbia.edu We also hosted a HuggingFace demo for you to try out our multi-scale CLIP relevancy extractor: huggingface.co/spaces/huy-ha/
This is an incredible paper! Excited about control as program induction. Reminds me of en.m.wikipedia.org/wiki/Symbolic_ It also shows most robot tasks studied are implementable with a few lines of scripted code, so they shouldn't need many human demos or RL (given good CV/NLP)
Quote Tweet (Code as Policies announcement quoted above)
Just ask for the source code
Quote Tweet (Code as Policies announcement quoted above)
It's out! Even though natural language is a very useful planning medium, it can be limiting. Generating code fixes many of these limitations: code-as-policies.github.io ai.googleblog.com/2022/11/robots See the great 🧵 describing the work in detail 👇 A few examples below:
Quote Tweet (Code as Policies announcement quoted above)
Excited this is out! CaP uses code-generation LLMs to plan and execute robotics tasks, since code is more expressive and can represent complex logic: loops, conditionals, etc. I want to highlight a few contributions of this work that go beyond vanilla program synthesis:
Quote Tweet (Code as Policies announcement quoted above)
“What if we just had the LLM write the code that runs the robot?” Also, a generic way (hierarchical code-gen) to improve code-writing LLM performance on OpenAI’s benchmark used for Codex — and applicable for code-gen in any domain. Great thread by with more!
Quote Tweet (Code as Policies announcement quoted above)
Beyond task planning, can LLMs generate robot policy code that exhibits spatial-geometric reasoning ("draw 5cm hexagon around apple"), and leverages code logic ("go in a 1.5m square until you see a coke"), all given a language instruction and without any additional training? 🧵👇
Quote Tweet (Code as Policies announcement quoted above)
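As a flavor of what such generated policy code can express that a flat language plan cannot, here is a hand-written sketch of the "go in a 1.5m square until you see a coke" behavior; the `goto`/`detect` robot API is hypothetical, not the actual CaP interface:

```python
def square_until_seen(goto, detect, side_m=1.5, obj="coke"):
    """Drive the perimeter of a square with the given side length,
    stopping early if the target object is detected.
    `goto(x, y)` and `detect(name)` are hypothetical robot primitives."""
    corners = [(side_m, 0.0), (side_m, side_m), (0.0, side_m), (0.0, 0.0)]
    for cx, cy in corners:
        goto(cx, cy)
        if detect(obj):      # code lets the plan branch mid-execution
            return True
    return False
```

The loop and early-exit conditional are exactly the control flow a step-by-step language plan struggles to represent.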
Excited to see Inner Monologue is covered by ! Using language as a common interface, we show how humans and different robot modules can talk to each other, enabling closed-loop planning. This was done during my internship with an amazing team at Google Robotics!
Quote Tweet
New Video - Google’s New Robot: Your Personal Assistant! 🤖 youtu.be/Ybk8hxKeMYQ
Workshop on Pre-training Robot Learning final call for papers! The second and final submission window closes today (Oct 26th) at 11:59PM UTC! Submit your 4-page extended abstract; virtual-only presentations are also accepted!
Quote Tweet
Announcing the 1st "Workshop on Pre-training Robot Learning" at @corl_conf, Dec 15. Fantastic lineup of speakers: Jitendra Malik, Chelsea Finn, Joseph Lim, Kristen Grauman, Abhinav Gupta, Raia Hadsell. Submit your 4-page extended abstract by September 28. sites.google.com/view/corl2022-
[Embedded video, 0:27]
Montessori busy boards for robots! We're open-sourcing a toy-inspired robot learning environment for developing essential interaction, reasoning, and planning skills. Let's give our robot toddlers toys to play with before asking them for help in the kitchen ;) (1/n)
[Embedded GIF]
VLMaps allows "open vocabulary obstacle maps" for path planning with different robots! E.g. a drone can fly over tables, but a mobile robot may not. Both can share a VLMap of the same env, just with different object categories to index different obstacles.
[Embedded video, 0:23]
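A minimal reading of "different categories index different obstacles", sketched with NumPy; the per-cell feature map and text embeddings are stand-ins for whatever VLMaps actually stores, not its real API:

```python
import numpy as np

def obstacle_map(cell_feats, text_embeds, thresh=0.5):
    """cell_feats: (H, W, D) L2-normalized per-cell visual-language features.
    text_embeds: (K, D) L2-normalized embeddings of this robot's obstacle
    categories. A cell counts as an obstacle if it matches any category."""
    sims = cell_feats @ text_embeds.T        # (H, W, K) cosine similarities
    return sims.max(axis=-1) > thresh        # (H, W) boolean obstacle mask
```

A drone might pass only categories like "wall" while a ground robot also passes "table", so the same shared map yields a different obstacle mask per embodiment.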
VLMaps provides spatial grounding for VLMs like LSeg. Notably, when combined with code-writing LLMs, this allows navigating to spatial goals from natural language such as: "go in between the sofa and TV" or "move 3 meters to the right of the chair"
[Embedded video, 0:18]
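Those spatial phrases reduce to tiny geometric helpers that a code-writing LLM can compose over map lookups. A sketch under assumed conventions (2-D map positions, +x meaning "right"); the helper names are hypothetical:

```python
def between(pos_a, pos_b):
    """'go in between the sofa and TV' -> midpoint of two map positions."""
    return ((pos_a[0] + pos_b[0]) / 2, (pos_a[1] + pos_b[1]) / 2)

def right_of(pos, meters):
    """'move 3 meters to the right of the chair', assuming +x is right."""
    return (pos[0] + meters, pos[1])
```

The positions themselves would come from open-vocabulary lookups in the map; the LLM only has to write the arithmetic that combines them into a navigation goal.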
Join us for the workshop on "Pre-training Robot Learning" at CoRL 2022! Submission deadline: Sep 28, 2022. We have an incredible lineup of speakers! Website: sites.google.com/view/corl2022- 🧵👇
[Image]
Quote Tweet (workshop announcement quoted above)
Tasks that seem simple to humans — like cleaning up a spilled drink — are actually incredibly complex for helper robots. That’s why Google Research and Everyday Robots are using language models to improve robot learning.
Really excited about our latest upgrades to PaLM-SayCan! Love that we get to benefit from LLM capabilities, as we start thinking more about language as robot middleware. 🧵👇 sites.research.google/palm-saycan
Quote Tweet
We have some exciting updates to SayCan! Together with the updated paper, we're adding new resources to learn more about this work: Interactive site: sites.research.google/palm-saycan Blog posts: blog.google/technology/ai/ and ai.googleblog.com/2022/08/toward Video: youtube.com/watch?v=E2R1D8