Jan Hendrik Kirchner
@janhkirchner
phd student in comp neuroscience @ mpi brain research frankfurt, universalprior.substack.com, ➡️ waluigi theorist
Science & Technology · San Francisco, CA · Joined March 2018

Jan Hendrik Kirchner’s Tweets

Pinned Tweet
I spent a two-week vacation fine-tuning a large language model on my writing from the last decade to produce what I lovingly call #IAN (intelligence artificielle neuronale, French for "neural artificial intelligence"). I wrote a Substack post about what it is and how I made it! Check it out :)
Clustering is mathematically impossible! Kleinberg (2002) stated three axioms that any clustering procedure should satisfy and showed that no clustering procedure simultaneously satisfies all three. Intuition for this striking result👇
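One way to build intuition for Kleinberg's result: his three axioms are scale-invariance (rescaling all pairwise distances must not change the clustering), richness (every partition of the points must be achievable by some distance function), and consistency (shrinking within-cluster distances and stretching between-cluster distances must not change the clustering). A fixed-distance-threshold single-linkage procedure, sketched below for 1-D points (function names are my own, not from the paper), illustrates how one axiom fails: scaling the data changes the clustering, violating scale-invariance.

```python
def threshold_clusters(points, r):
    """Toy single-linkage on 1-D points: chain together
    consecutive points whose gap is < r."""
    points = sorted(points)
    clusters = [[points[0]]]
    for p in points[1:]:
        if p - clusters[-1][-1] < r:
            clusters[-1].append(p)  # close enough: same cluster
        else:
            clusters.append([p])    # gap >= r: start a new cluster
    return clusters

print(threshold_clusters([0, 1, 5], r=2))   # [[0, 1], [5]]
# Scale every point by 3 -- the same procedure now gives a
# different partition, so scale-invariance fails:
print(threshold_clusters([0, 3, 15], r=2))  # [[0], [3], [15]]
```

Kleinberg's theorem says this is not a defect of this particular procedure: any procedure you pick must give up at least one of the three axioms.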
sampling only ever demonstrates the presence, not the absence, of a capability. would be surprised if sota models couldn’t play tic-tac-toe optimally after finetuning (which surfaces capabilities that are already there)
Quote Tweet
When @GaryMarcus and others point out that GPT-4 is bad at chess and therefore not close to AGI, it falls flat for me. But when I can’t coax GPT-4 to defeat me at *tic-tac-toe*, I start to think there’s something even more deeply wrong than I realized. poe.com/s/KxQMDTGMzBIT
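As an aside, tic-tac-toe is small enough to solve exactly, which is what "play optimally" means here: a plain minimax search over all positions confirms that optimal play from the empty board is a draw. A minimal sketch (all names are my own):

```python
from functools import lru_cache

# The 8 winning lines of a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Game value for X under optimal play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + (player,) + board[i + 1:], other)
              for i in range(9) if board[i] == ' ']
    return max(values) if player == 'X' else min(values)

print(minimax((' ',) * 9, 'X'))  # 0: optimal play from the empty board is a draw
```

So "optimal" tic-tac-toe just means never losing against any opponent, and the full game tree is tiny enough that this is checkable in milliseconds.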
dang
Quote Tweet
Replying to @ESYudkowsky and @davidad
I wanted to understand this exchange better. I quoted the two tweets in #chatgpt4 and asked: Can you please explain what their disagreement is, and which position is more rational? This was the response: The first tweet suggests that a type of machine learning model,…
Commentary at greater length:
- I'm encouraged that somebody ran right out and tried this.
- It's not clear (to me, yet) that it worked all that well, or better than expected; I have not yet significantly updated my model of how technically hard interpretability is.
- It is…
Quote Tweet
new research from OpenAI used gpt4 to label all 307,200 neurons in gpt2, labeling each with plain english descriptions of the role each neuron plays in the model. this opens up a new direction in explainability and alignment in AI, helping make models more explainable and…
“Ok, but your GPT agent must have successfully completed a task by now?” “nope” “It has no technical ability. The confidence it has, along with its ability to express what it feels, has proven helpful.”
This one was a ton of fun to write and work on in general!
Quote Tweet
@janhkirchner has written a beautiful piece about the procrastination support group that he organized for me universalprior.substack.com/p/simulator-mu
At EAG SF and interested in AI safety? Stop by OpenAI at the career fair or DM me. We're hiring across teams (incl. Trust and Safety, Security) - and we are always interested in hearing what you want to work on & what you think we should work on!
it’s been a blast reading Ava’s thoughts on this! She has a fantastic knack for getting art out of DALL-E
Quote Tweet
Over the past few weeks, I've been wondering about the impact & utility of DALL-E & related algorithms. My awesome friend @janhkirchner just joined @OpenAI & I got to play around with DALL-E. With that, here's a piece I wrote for Jan's substack. Enjoy! universalprior.substack.com/p/hello-dall-e
When I was young(er) I started coding because I wanted to build AI. That's pretty difficult, so I pivoted to "being part of the team that builds AGI". Now I'm happy to announce that I'm approaching my goal - I've joined OpenAI (Alignment Team) 🥳 Looking forward to exciting times