PSA, posting “[insert LM here] can’t do [thing]” is a double L. First L because it might be able to with better coaxing, second L because you showed you didn’t understand the first.
OTOH it’s fine and good to post like “I couldn’t get LM to do the thing; here’s what I tried.”
for sure, but that's the issue with LLMs: regular people and even some AI researchers are "still making basic philosophical category mistakes about them" (from linked post) - treating LLMs as agents that carry out commands w/o proper conditioning
Well said! I find there are a lot of people who want this thing to fail and are all too eager once they see examples of failures. I like to take those failures and see how I might make the LLM succeed at the task (and often it can).
Discover more
Sourced from across Twitter
can't shake the feeling that this paper is a big deal. i've never seen this level of geometric awareness of arbitrary moving objects, and the "quasi-3d canonical volume" mentioned feels a lot closer to our human quasi-3d visual perception than something like NeRF
Quote Tweet
Tracking Everything Everywhere All at Once
paper page: huggingface.co/papers/2306.05
present a new test-time optimization method for estimating dense and long-range motion from a video sequence. Prior optical flow or particle video tracking algorithms typically operate within limited…