Conversation

i claim no expertise in technical alignment but FWIW (not much) i’m sympathetic to doomers but not decels. i think AGI will happen, ASI will follow, and this implies real risks. but i reject the idea that if those who care stop working we’ll somehow be safer.
more concretely, i see long-term alignment as continuous with near-term reliability and control. i find it implausible AGI could be low-risk if the preceding version helps idiots write malware and build bombs. it’s easy to mock today’s risks, but the future is less amusing.
people over-index on the insufferable cringe of popular doomer hyperbole. i used to be more doomy — fear begets wild thinking that i’ve learned to forgive. but top-quartile doomers are more reasonable and sincere than accs want to admit, and vice versa. i try to be nice to both.
Do you think there is a risk that by doing product work with LLMs, one might be speeding up capabilities without helping with alignment? E.g., instead of product work, should one go work on alignment?
there are second-order effects to everything. my hope is that work on near-term control is good, so that’s where i try to focus (red teaming, specifically). but i don’t think anyone should feel guilty for applying today’s AI commercially, within reason; contact with reality is good
Yep. IMO caution and responsibility are (or should be) just standard procedure for powerful new tech. Only rhetorical extremists claim safety == doom. Doom is something more specific, as you say about decels.
Quote Tweet
Replying to @daniel_271828
A doomer is someone who thinks the solution to non-zero p(doom) is to stop all AI capability research and deployment and only do safety research.
Yes, e/acc and doomerism are treated as a binary in the face of great change. There are real risks, but hysteria draws attention away from them. Both sides deserve sympathy, and both have ideas with merit, but solving these problems requires understanding both sides.
I think the idea is just that some of them shift slightly in what they are working on. Like imagine if even 1/3 of the people currently working on capabilities switched to working on control/interpretability/corrigibility problems.
The only reason people are optimistic about the future of AGI is that they aren't good enough at seeing the bigger picture, are reckless, or embrace the "what could go wrong?" mentality.
what about "stop working on AGI and work on safety instead"? that's better than cowering in fear, and better than actively working on dangerous technology.

The scale of modern GPU computation is so incomprehensible that I regularly find that even experts underestimate it. A 4090 can do ~150 THOUSAND fp32 ops per pixel per frame at 4k 60 Hz, and can load kilobytes for every single pixel from VRAM (and more from on-die SRAM).
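The per-pixel figures above can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming the commonly cited RTX 4090 peak specs (~82.6 fp32 TFLOPS, ~1008 GB/s VRAM bandwidth); the per-pixel numbers follow by dividing by the 4K 60 Hz pixel rate:

```python
# Assumed peak figures from NVIDIA's published RTX 4090 specs (not from the tweet):
FP32_TFLOPS = 82.6   # peak fp32 throughput, TFLOPS
VRAM_GBPS = 1008     # peak VRAM bandwidth, GB/s

# 4K resolution at 60 Hz
pixels_per_sec = 3840 * 2160 * 60

ops_per_pixel = FP32_TFLOPS * 1e12 / pixels_per_sec   # ≈ 166,000 fp32 ops
bytes_per_pixel = VRAM_GBPS * 1e9 / pixels_per_sec    # ≈ 2,000 bytes (~2 KB)

print(f"fp32 ops per pixel per frame: {ops_per_pixel:,.0f}")
print(f"bytes from VRAM per pixel per frame: {bytes_per_pixel:,.0f}")
```

This lands close to the tweet's "~150 thousand fp32 ops" and "kilobytes per pixel" claims; real kernels see less than peak, so the order of magnitude is the point.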
I'm watching a second-order effect right now, in myself and others: due to GPT-4, the capability of each individual human has 10x'd (if they *really* try!). You can just.. figure out how bare metal works! You can just.. tear through it! Nothing is stopping you now!
(para) "..expert prompting, where we ask the model to suggest named experts on a given question, ask for the answer each named expert would have given, and subsequently make a collective decision. Experiments show expert prompting improves performance relative to prior techniques" lol
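The three steps paraphrased above can be sketched in a few lines. This is a hedged reconstruction, not the paper's actual implementation: `llm` is a hypothetical stand-in for any chat-completion call, and the prompt wording is illustrative.

```python
def expert_prompt(question: str, llm, n_experts: int = 3) -> str:
    """Sketch of 'expert prompting' as paraphrased in the tweet.

    `llm` is any callable mapping a prompt string to a response string
    (hypothetical stand-in for a real chat-completion API).
    """
    # Step 1: ask the model to nominate named experts for the question.
    experts = llm(
        f"Name {n_experts} experts qualified to answer: {question}\n"
        "Reply with one name per line."
    ).splitlines()

    # Step 2: elicit the answer each named expert would have given.
    answers = [
        llm(f"As {expert}, answer the question: {question}")
        for expert in experts
    ]

    # Step 3: make a collective decision from the individual answers.
    joined = "\n".join(f"- {a}" for a in answers)
    return llm(
        f"Given these expert answers:\n{joined}\n"
        f"Synthesize a single best answer to: {question}"
    )
```

Plugging in a real API client for `llm` turns this into a working pipeline; the aggregation step is just another LLM call, which is what makes the technique cheap to try.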
Quote Tweet
LLM Passes MIT Math & Computer Science
- 4,550 questions from the 30 MIT Math & CS courses required for a degree
- New benchmark likely not in any training data
On the test set excluding image questions, with prompt engineering:
- GPT-3.5 solves 33%
- GPT-4 solves 100%
arxiv.org/abs/2306.08997