don't work yourself too hard!
don't take it too easy!
do just the right amount
for max utility ♪
tammy
@carad0
◆ ∪w∪ 〜
◆ AI notkilleveryoneism researcher • orxl.org
◆ wanna chat? discord DMs open (tammy#1111). i prefer text, voice is ok.
tammy’s Tweets
a short chat about realityfluid!
𝗧𝗵𝗲 𝗽𝗮𝘀𝘁 𝟲 𝗺𝗼𝗻𝘁𝗵𝘀:
“Of course, we won’t give the AI internet access”
𝘔𝘪𝘤𝘳𝘰𝘴𝘰𝘧𝘵 𝘉𝘪𝘯𝘨: 🤪
“Of course, we’ll keep it in a box”
𝘍𝘢𝘤𝘦𝘣𝘰𝘰𝘬: 😜
“Of course, we won’t build autonomous weapons”
𝘗𝘢𝘭𝘢𝘯𝘵𝘪𝘳: 😚
“Of course, we’ll coordinate and…
An Evangelion dialogue explaining and contextualizing our formal-goal alignment plan, QACI
lesswrong.com/posts/i9okkiKQ
Orthogonal's latest research:
formalizing the QACI alignment formal-goal
lesswrong.com/posts/MR5wJpE2
if we observe ourselves to be past the actual point of no return to doom, i.e. the red branches in carado.moe/quantum-immort, we should not flail around or be sad that we failed
we should merely go "ah, we are the us's who didn't get there", and strive to have a good time in what time is left
The idea with agent foundations, which I guess hasn't successfully been communicated to this day, was finding a coherent target to try to get into the system by any means (potentially including DL ones).
new research from OpenAI used gpt4 to label all 307,200 neurons in gpt2, giving each a plain-english description of the role it plays in the model.
this opens up a new direction in explainability and alignment in AI, helping make models more explainable and…
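a minimal sketch of the labeling idea, not OpenAI's actual pipeline: show a stronger model the text snippets on which one neuron fired hardest, and ask it for a one-line guess at what the neuron responds to. the `label_neuron` helper and the example snippets below are made up for illustration; it assumes the `openai` python client (v1+) with an API key in the environment.

```python
# sketch of automated neuron labeling, assuming the openai v1+ client.
# label_neuron and the example activations are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def label_neuron(top_activations: list[tuple[str, float]]) -> str:
    """Ask a stronger model to guess a neuron's role, given text
    snippets on which that neuron activated most strongly."""
    excerpts = "\n".join(
        f"(activation {act:.2f}) {text}" for text, act in top_activations
    )
    prompt = (
        "Below are text excerpts on which one neuron of a language model "
        "activated most strongly. In one short sentence, describe what "
        "the neuron appears to respond to.\n\n" + excerpts
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# made-up snippets standing in for a neuron's top activations
print(label_neuron([
    ("the marvel cinematic universe", 9.1),
    ("spider-man: no way home", 8.7),
    ("a new superhero movie", 7.9),
]))
```

the published method goes further than this sketch: each candidate explanation is also scored by how well it predicts the neuron's activations on held-out text, so bad labels can be filtered out automatically.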
Orthogonal's Formal-Goal Alignment theory of change (5 min read)
(my fictional selves consent to being created and uncreated on the fly like this, and they even cooperate with me)